The person you describe as knowing sounds awfully like Caroline Ellison, in which case, let me say that I refuse to believe that she could have acted in bad faith until I am given overwhelming evidence to the contrary. On the contrary, the impression I got is of a true believer, and a good person. This does not preclude the possibility that, under circumstances of a certain naiveté and inexperience in a field as murky as crypto, she might have let herself go along with what she might have perceived as temporary and 'bad' expedient means. But to believe this person ever intended to purposely and maliciously scam people out of their money or be privy to a fraud is, for me, completely out of the question. I believe the best option is to be charitable and wait to see what the courts of law have to say once the dust has settled.
Here's the rub. Bankman-Fried and his woman look like goblins. Human beings instinctively recoil from goblins. Rationalist utilitarians say 'no, there's no rational reason to recoil from people who look like goblins'. But there is.
Now, it's theoretically possible to construct a version of utilitarianism that would be sufficiently inclusive of both heuristic rules and the dark, dark secrets of HBD and psychology. But the problem is that it would be very complicated and the whole point of utilitarianism is to simplify morality. So, in practice, rationalist utilitarianism always ends up saying some version of 'no, there's no rational reason to recoil from people who look like goblins'. But, to reiterate, there is.
Reading this week’s Douthat column, I kind of regret rage-canceling my paid subscription to Scott’s Substack. “You won’t get a refund,” said Substack. “No more hidden OTs for you.” “I don’t care!” I replied.
I am curious how Scott will address the appeal for a bit of perhaps less than maximally effective altruism tho.
It was an earlier essay where Scott made the point that replacing millions of Japanese with people from Sub-Saharan Africa would change the country’s culture.
He was right of course. If you replace a large portion of descendants of a culture that dates back to antiquity with people unfamiliar with that culture, things are going to change. But why Africans in particular? A substitution of Englishmen or Germans or anyone else would have a big impact on a mature culture.
It seemed like a proxy for White Replacement Theory in the US, something I don’t at all worry about. I could have been reading that wrong of course, but that was how it struck me at the time.
I talked myself down from that ledge eventually but had already canceled my paid subscription. It was a futile gesture because Substack had already collected for a year’s subscription. Just one more in a lifetime of futile gestures. I suppose I run a bit hot on some long-held beliefs.
I need advice for a friend who is in ungodly amounts of pain. I am thinking about the SSC article on pseudoaddiction- miner who takes opioids for years for horrible mining injuries speaks brusquely to hospital staff, gets his opioids taken away, shoots himself in the chest, miraculously survives, etc. My friend has been taking opioids for 6 years after a horrific car accident, and their doctor is threatening to take them away. What should they do?
Your friend can point out to their physician that the CDC changed its opioid guidelines on November 4, 2022 (https://www.cdc.gov/mmwr/volumes/71/rr/rr7103a1.htm). Specifically, it has moved away from titrating or weaning long-term opioids, especially if they are well tolerated. Sudden cessation of opioids is increasingly identified as patient abandonment and can theoretically be reported to the state board.
I wish I had a better answer for you. We are in a period of reaction regarding opioids, as I’m sure you know. My wife had outpatient knee replacement surgery a year ago and she had to endure a lecture about how half of West Virginia is on heroin now, and they didn’t *start* with heroin. The spiel went along the lines of “Studies have shown that acetaminophen and ibuprofen are just as effective.” At that point I pointed out that she can’t take ibuprofen because of an ulcer, but that didn’t slow her down. “… aroma therapy and meditation can be used to control pain too….” It went on and on as they tried to hustle my wife out of bed and send her home. It got to the point where I had to interrupt the nurse and ask, “Shouldn’t you be giving this lecture to the Sackler family?” She was wearing a Covid mask but I could see her bite her lower lip.
If worse comes to worst for your friend, he might try kratom if it is legal where he is. Some forms are supposed to relieve pain in the same manner as opioids. It’s a non-optimal solution of course. He’d have no way of knowing for sure what he was ingesting, since there is no government regulation of kratom, and the US has been trying to make its sale illegal, but it sounds like your friend is in a tough spot and might want to consider it.
This raises the question of "why," which I doubt you could supply in any case. Answers include agreeing to stop going in early for repeats (maybe signing a pain management contract stipulating this), asking for a slow taper, perhaps sufficiently slow to allow time for referral to another prescriber, showing up at appointments having made some obvious efforts at self-care, etc., things that suggest the opiates are actually objectively improving his quality of life (it doesn't sound like they are here).
While those are certainly possibilities, I know for a fact people get arbitrarily cut off from years of necessary medication: my mom was one of them. Not only did they never ask for an additional refill, it would be preposterous to assume that someone with extensive damage to their back (three herniated disks) wasn't warranted in asking to receive opioids. They were subjected to forced taper-offs, recommended to consider surgery (they did not want to), and at their most vulnerable moments were reduced to lifelessly slouching on a plush leather chair.
It was definitely a formative experience for me. Before then, I worked from the assumption that failure to act (both politically and personally) was the most consequential thing; the impact of the actions themselves wasn't a first priority to me. That's not to say none of it mattered at all, it just felt abstract to me; it didn't resonate. Part of that probably has to do with my political convictions (on a side note, I don't know if I can mention that on this thread; I'm not sure whether it's odd- or even-numbered threads where politics can be discussed, and if I can't, I'll edit it out in post).
Now, though, I believe both aspects are equally important (and to anyone wondering, my mom eventually did get a suitable replacement; it wasn't opioids unfortunately, but they're now able to comfortably function).
I may post this again, but I wanted to get it said. This might be an added explanation for research slowing down, but I don't know whether it's as bad in the sciences as it is in the humanities.
The short version is that it's not just bad at Amazon and Google; search has become relatively useless for academic sources.
Here's how I got past a cataloguing issue. I'd heard for years that things had gotten better for Jews in Germany, especially in the Weimar Republic. I realized there was a story there, but what was it?
Searching on Google didn't help. I was just getting anti-semitic stuff.
A couple of years passed, I think, and Google changed its policies. Now I was getting stuff *about* Nazis. They're more interesting than gradual legal change.
Finally I asked people. I got pointed at the emancipation of the Jews and a book called The Pity of It All.
The moral of this story is that you may have to ask people because computer search isn't working. It's like being in the Middle Ages or something, where local and specific knowledge is the essential thing.
This has been a pretty well-known problem in technical circles for years. Do a quick search for "google search" on Hacker News[1], and you get post after post[2] about how search is getting worse. And that doesn't even include the comment threads that pop up on unrelated posts.
It's one of those problems that gets worse the more you know about a topic, because google has trended towards searching for what it thinks you mean, rather than what you actually asked for (which is great for drunk people trying to figure out "that guy from that movie from the thing", less so for finding documentation or if you actually know what you're looking for).
It's also been overtaken by SEO blogspam - low quality, often GPT generated articles that use a 2000 word preamble to answer a question that takes 3 words, or content that is literally scraped and copy-pasted from websites like StackOverflow.
And, of course, there's the fact that you can't find any organic information about *any* product, and you have to append "reddit" to get anything besides adtech vomit in your searches.
If people aren't getting great search results anyway, they may consider at least getting private results. The main option for that is DuckDuckGo, but it has newer competitors such as Brave Search and Neeva (Ecosia is also much more private than Google, although not as private as DDG or Brave Search).
DDG generally works pretty well for me. When it doesn't find what I'm looking for, I'll try Startpage, which usually works. Questions have been raised about Startpage and privacy, which is why I primarily use DDG, but Startpage could hardly be worse than Google (or Bing).
Notably, DDG (and Brave Search) have "bangs" that allow you to initiate a search in another website, so you can do a Google search through DDG, by appending !g to your search query, for example. That would no longer be private, but it can be more convenient than first navigating to google.com.
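If it helps to see it concretely, a bang is just a short prefix on the query string; DDG parses it and redirects the search to the other engine. A toy sketch in Python (the function name is mine, purely illustrative; the real routing happens on DDG's servers):

```python
def add_bang(query: str, bang: str = "!g") -> str:
    """Prefix a DuckDuckGo 'bang' so the search is routed elsewhere.

    '!g' routes to Google, '!w' to Wikipedia, and so on. This only
    builds the query text; DDG itself performs the redirect.
    """
    return f"{bang} {query}"

print(add_bang("weimar republic jewish emancipation"))
# -> "!g weimar republic jewish emancipation"
```

Typing that resulting string into the DDG search box is equivalent to running the search on Google directly, just without visiting google.com first.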
Kagi search is worth trying; the free version is limited, but the results are impressive. It's also using an in-house index; it's not just a frontend for Bing like DDG.
I've also noticed a significant decline in search quality, particularly with Google/YouTube, that's happened fairly continuously over the last 10 or so years. I think that the issue mainly comes from trying too hard to make the algorithm 'smart', and focusing on the wrong things.
For example, on YouTube: when I first started making YouTube videos in the late 2000s, my videos would get tens of thousands of views. I don't have aspirations of stardom, but I do reasonably think that my audience worldwide, that is, people who would be interested in my content, is that big. In the mid-2010s, I'd get a few hundred views if lucky. Nowadays I'm lucky if I break into the double digits. What happened? Why are the people who want to find my stuff not finding it?
Basically, Youtube decided that it didn't want to neutrally give search results that closely matched what you typed in. It wanted to show you popular things that kind of matched what you typed in. It's not trying to show you the best result; it's trying to find an *excuse* to show you something that *it* wants to show you. This is really bad for several reasons, but the main reason is that if you are looking for a small signal, it will get drowned out every time by the closest large signal. This was not a problem before, because you could tell it to neutrally and unbiasedly just give you things that exactly matched what you put in.
The other problem is that the internet has changed. It used to be a place where weird or forward-thinking people were doing things that interested them. Now it is where most of the world's business is done, and also the de facto public square. And what was once great about youtube for example was that someone just messing around with a camera or talking in an unscripted way about something they liked could get seen and interacted with, without a lot of time wasted on making fancy powerpoint-style presentations. Now it's all people who are vying to make money on youtube, creating overproduced content with fancy studio lights and tight scripting.
While on the topic of DSL, is it possible for the admins (cc obermot) to reinstate the ability to ignore threads? This was a useful feature that disappeared without an explanation.
ACX about to experience what it's like when the weird offshoot branch of the family nobody really talks to anymore suddenly shows up uninvited to the family reunion :)
I only know DSL as an acronym for Digital Subscriber Line, which is an old technology for getting onto the Internet. If you mean it this way then this will be specific to your ISP (Internet Service Provider). If you mean something else it will help if you expand the acronym.
Sorry - especially because it always bugs me when people use acronyms I'm unfamiliar with.
I mean DataSecretsLox, the bulletin board Scott always mentions at the beginning of his open threads. It's been down for quite a few hours and that's never happened before.
I mean, the worst offenders, like the NYT, are cherry-picked, but the fact that the with-kid-gloves NYT article came out right after it got leaked that the NYT[1] has had an explicit "no positive coverage for tech" policy is worrying enough on its own.
I saw someone on twitter comment that the flurry of puff pieces and interviews was like indirectly observing the dark matter of some PR agency, and that rings true.
[1] And by extension, probably most media companies, since their ownership charts are incestuous
No, but I do trust the aggregate of the many rat- and postrat- adjacent accounts on twitter that were saying the exact same thing. Also, this is the comment section to be defending the NYT in lol. Of your list, the only one I remotely trust not to have a pro-finance bias is Reuters.
I'm pretty sure SBF et al were running their business in an exceedingly casual and reckless manner from the start. And I'm also pretty sure that their attitude towards e.g. securities and banking law was "bunch of unimaginative suits who will just get in the way of our legitimate business; the less they know the better". Which is often technically illegal. But I don't think they crossed the line into unambiguous cheating-our-customers-out-of-what-we-promised-them scamming until the last few months or so, probably about the time Alameda had its liquidity crisis and needed FTX's client funds to bail them out.
And even then, I think they were optimistically hoping that they'd do a bit of quick what-most-of-us-call-scamming, then double their money through super genius expert trading and refill the customer accounts before anybody noticed, then get back to their basically honest but casual and reckless business. Until the next time.
It also wouldn't surprise me if there was a *last* time, when they used their customer accounts to bail themselves out and then *did* make the money back before anybody noticed, but that may be hard to figure out. I think the records of that are in the heap under the beanbag chair in the orgy room.
I'm not sure the man can reliably distinguish between a scam and an ordinary trading business, so it might be the case that the distinction about which you're asking is not one he could make, which means whatever his motivations were they need not have changed markedly on crossing a line visible only to others (between 'financial trading biz' and 'scam').
After all, pretty much all trading business is a series of bets, and in each bet there is usually a winner and a loser. I bought a bunch of XOM last year at 35 because I thought that price was definitely going to go up, and the fact that it's 113 today means (at least so far) I was right and I'm a winner, but that also means all the people from whom I bought the stock last year were wrong and are losers. Some of their money has been transferred to me, and not because I earned it by doing work for them, but just because they took the losing side of a bet with me.
We don't consider this a fraud, scam, or theft, because of certain bright lines we draw in our heads about what is a "fair" bet between consenting participants and what is not, e.g. we require the bet to be made with the full knowledge of the owners of the capital at stake, we require nobody to have any unusual not generally available information, we require nobody to be in a position of authority that could influence the outcome of the bet, everyone has to be an adult, et cetera.
But many of these lines are in practice somewhat arbitrary, and why we draw this line versus that, and why here versus there is often a bit fuzzy, with relatively dubious precise justification[1]. Nevertheless, most of us believe there *are* bright lines that separate "fair" from "unfair" or "criminal", even if the actual legal lines aren't drawn precisely in the right place.
There are other people, however, who see the arbitrariness, however modest, and extrapolate from its existence to the general conclusion that *any* bright line is just arbitrary bullshit, and there really isn't any important moral difference between ordinary trading and what some unenlightened dummies would provocatively call "a scam." I think people who end up running scams in what looks like an accidental way, just kind of wandering heedlessly across some bright line or other, are probably often in this category. Because they don't much believe in the existence of bright lines, they don't take ordinary care to stay on one side of them.
People who run Ponzi or Nigerian prince scams from the get-go by contrast probably totally grok bright lines, and presumably decide to just cross into the Neutral Zone deliberately because it will be personally profitable.
----------------
[1] And to be fair, I think sometimes the arbitrariness of the lines can sometimes ensnare genuinely innocent people. I've always been a little dubious that Martha Stewart was guilty of insider trading, although I could be wrong about that of course.
Your logic makes no sense. By your logic, if you ever sell your stock and it continues to rise, then you are a loser. Actually, if there exists any other investment that would have made more money than your purchase of stock, you are a loser.
Clearly this is wrong. The winner or loser status of the person whose stock you purchased must be based on the performance (relative to expectations) of the stock between their purchase and sale and not after they sold.
I could probably have phrased it better by saying that people voluntarily surrendered their right to a future stream of income to me for a price that turned out to significantly undervalue that income. How you want to phrase that in accounting terms would probably be a matter of taste, about which I have no strong opinion because I find the question a bit OCD and uninteresting, but I'm pretty sure to most people it would feel like "a loss," and taking the other side certainly feels like a gain to me.
If you mistakenly think your house is infested with termites and must be torn down, and I come along and, taking advantage of your bad judgment, buy your house for 25 cents on the dollar, far below its market value after I have it inspected and prove there are no termites, do you still think value hasn't been transferred from you to me? Same idea.
And some people would say the latter example is a "fair" deal, because you could've had it inspected yourself, and others would say it's "unfair" if, for example, you are actually mentally ill and you think it's infested by Martian termites who can turn themselves invisible to evade inspection -- and therefore I took cruel advantage of a disability. That relates to the larger and more interesting issue, which is that there are bright lines that separate "sharp business deal" from "fraud", but (1) we're in general a little fuzzy and arbitrary about where we draw them, and (2) some people of a rigid or antisocial mindset argue the imprecision proves the lines per se are bullshit, and so there *isn't* any important difference between "sharp business deal" and "fraud." Our Hero SBF may be one of them.
The news that keeps coming out on this looks worse and worse. These were absolutely not innocent, naive doofuses who stumbled into incredibly serious crimes.
His interview with Vox (and god, what a hilariously bad idea that was) gives me the impression of stupidity rather than malice. Like, just so careless with the exchange that he didn't know or care where the money was going until a bank run forced him to actually sit down and count up where all the money had gone.
He does say some "meh, ethical investing is just a sham" stuff which might make me lean towards an intentional scam, but taking the interview as a whole I'm inclined to read it as an after-the-fact defense - "everyone's a scammer, so what I did wasn't that bad" - rather than an admission that the whole thing was planned from the start.
(Not that I'm trying to defend him. Even his own description of his errors is well into "Sufficiently advanced stupidity is indistinguishable from malice" territory. But I don't think he ever had the clarity to say to himself "this is definitely violating the law but it's going to make me enough money that I don't care.")
To be overly technical, SBF didn't run a scam. He ran a legitimate business (effectively a currency exchange) and stole from it. I think this answers your question: he started out running a legitimate currency exchange but had so little moral fiber that once a lot of people trusted him with their money he just stole it for his own purposes.
This doesn't seem to be the case based on new statements from the company after the SBF orgy club was removed. This was barely a company at all. They didn't keep records, in fact SBF encouraged them to use auto-delete communication methods, they were fake audited by a fake audit company or something, they don't know who their creditors or debtors are, where their money is, or what their assets and liabilities are. Their subsidiaries were not set up right, either.
Preface: FTX and SBF seem to have almost certainly engaged in fraud. The record keeping seems either atrocious or a deliberate attempt to obscure the real record. That said,
It's standard practice to encourage employees to use less permanent media (or ideally, face to face communication) for any communication that might have any legal bearing. There is training to this effect at all the companies I've worked at. Perhaps this is shady, but it is also pretty normal.
If so I might be mistaken. But I was under the impression it did have a functioning platform which is why people still have money in it and are trying to get it out.
That everything was done in a wrong/criminal/etc way: Yes, for sure.
My guess is that this is a boringly simple case of "person has large amounts of money to invest, person makes bad investment decisions, person loses money, person then makes set of riskier and riskier gambles to try to get back above water, compounding the losses." In this case, with larger numbers than usual because there has been a ton of capital sloshing around trying to find good investment opportunities in the last few years.
I'm guessing it didn't start as a scam, and mostly wasn't one. For whatever reason his hedge fund ran into real trouble, and he thought up this one clever hack using funds from his crypto exchange to save it. And since everything he was doing was for the greater good anyway, he decided not to let old-fashioned rules stop him.
I know the issue of 'use cases' with crypto has been beaten to death, so I apologize in advance for the redundancy, but I would like to ask the smart people on this chat: Is crypto the first example of an innovation/commodity for which the garden-variety champion cannot explain the 'use case' to the typical rube?
Self disclosure: In this context (and others, without question, but those aren't relevant here) I am the rube. And I have read all manners of interviews with the likes of SBF and the desperately malnourished kid who started Ethereum, and whenever the question of 'so what is it really good for' comes up we get the inevitable 'that's a really good question!' (to all who have been to an academic conference, feel free to laugh with me!) and then a bunch of 'blah blah blah decentralized blah blah blah' and we move on to the next question.
So- I'm not here to argue that crypto 'doesn't' have a use case, because it absolutely might, and I can see some distinct paths where it does. But in terms of explaining it to the average guy, I think it's fallen laughably short. Which, to my eye at least, is an interesting feature of this commodity, since I can't think of another example of an asset that has this unique property.
Am I wrong? Have there been others? If not, is this an augury of what's to come (i.e., more assets that end up worth more than the GDP of Brazil but that nobody can clearly explain how they will improve our lives in the short-to-medium term)? Or if so, what were they and what happened to them?
I think the "typical rube" can understand money laundering easily enough. The hard part is for champions to make money laundering into something the typical rube will support.
The typical rube *also* understands speculative bubbles in assets of dubious fundamental value, but it's probably a lost cause to try and make that sound good. The money laundering, you can at least point to people trying to flee oppressive regimes or to buy medical marijuana (adderall, whatever) in Red States.
> Is crypto the first example of an innovation/commodity for which the garden variety champion cannot explain the 'use case' to the typical rube?
No. It isn't. It's actually common.
Electricity, to go back almost two hundred years, was also quite mysterious to the average person. And simultaneous with it being used for things that laid the groundwork for modern electric grids you had grifters claiming it was magic. One woman in Paris told people it had magical powers. People would pay to sit near an electric engine. She would have assistants rub water on skin and then shock people for supposed health benefits. Etc
Now, you might be saying that lighting a building up is an obvious benefit. And it is. But it took over sixty years to get from the first economical electric engine to the first electrically lit home.
This is entirely typical. There were people saying Google was too complex to explain compared to things that more closely resembled index card systems that most people had been trained on. Etc. The idea that innovation is obvious or easily understood comes from media where the genius goes "eureka!" and explains how to solve their problem in simple terms the audience can understand. But it's not how it actually works.
So in the sixty years between the first economical electric engine and the first electrically lit home, was there a use case for electricity? If so, could it be explained to a rube? I'm not familiar with the history of electricity, but I'm having a hard time thinking of a use case that couldn't be easily explained (e.g. "you know watermills? This lets you grind stuff too, but without water. You can put a 'watermill' anywhere now!")
That's because you're looking backward. What is obvious to you was not obvious to them.
The first electric mill took even longer. The first electric lighthouse took a little less time but not by much. The first commonly comprehensible invention that such electrical generation enabled was telegraphs. And even that took a specialist to understand. And they invented new kinds of fraud such that even now we have statutes on the books such as wire fraud. This was all happening alongside a large number of grifters who claimed electricity could do anything from help you connect with God to predict the future to raise the dead.
You can say, "Ah, but the use of telegraphs is obvious!" Well, it's obvious to you now. It wasn't obvious at the time. The British government repeatedly wholesale rejected the use of the telegraph and it was viewed with significant suspicion at the time. The Royal Navy famously compiled a report where they rejected the innovation as not really adding value. The average rube did not realize the value and it took concerted effort by electrical advocates to drive adoption. (And a significant number of rubes were outright scammed.)
Now, just because electricity was doubted doesn't mean anything that's doubted is valid. That's the same error in reverse. But the fact there's doubt or that non-specialists don't understand it doesn't really mean anything. It's common even for things we think are "obvious" today.
I think I didn't explain myself well. I didn't mean to say that the future uses of electricity were obvious. I meant that, at the time when electric mills, telegraphs, electric lights, etc were in the future, electricity really didn't have many uses. A rube would have been right to be suspicious of anyone selling electricity, because as you said, lots of them were con artists claiming electricity had magical powers and going around shocking people. When electric mills, telegraphs, electric lights, etc *did* get invented, and electricity was no longer just a scam or a curiosity, the use cases *were* easily explainable to rubes.
Let's apply this to crypto. Right now, there are no obvious use cases that are easily explainable to a rube that aren't of debatable value. (The ability for anyone, including criminals, to transfer money without regulation is easily understandable--but also of highly debatable value.) Therefore, rubes are right to conclude that crypto is either a curiosity or a scam. In the future, if and when great use cases for crypto *do* get invented, it'll no longer be just a curiosity or a scam, and the great use cases *will* be easily explainable to rubes.
Now I think I didn't explain myself well either. The telegraph, after it was proven functional and reliable, took decades to drive adoption. The Royal Navy, the most advanced in the world at the time, did a full investigatory report and determined that it was not useful despite acknowledging its literal capabilities. They basically said (as someone said to me yesterday about crypto) faster speeds alone didn't justify the switch. So no, they *weren't* easily explainable or widely accepted. And plenty of scams coexisted with the valid uses. So your application is wrong.
Crypto might be a scam. But the fact the average person is caught by scams or cannot understand it doesn't point one direction or another. And you're attempting to construct an argument on sand that fits what I guess are your preconceptions. Here's a simple argument: the ACH takes 3 days and crypto takes minutes. (If I'm being a bit simple it's because I've had this argument two other times this week. And both times it ended with them admitting I was right on the technical merits but a few days of extra speed weren't worth enough to justify it.)
The standard answer that I'm familiar with is that fiat is entirely controlled by national governments and they can and have totally fucked over currencies before, whether with asset seizing or with hyperinflation. Less of an issue in the West, very much an issue in much of the rest of the world.
In theory, Crypto gives you a currency that can't be a victim of inflation (though in practice it's treated by most as a speculative commodity instead).
In theory, crypto gives you an anonymous currency, so you can buy illegal things - which might well be medicine rather than recreational stuff - or so that you can protest against the government without them freezing all your bank accounts to starve you. In practice, most cryptocurrencies out there are not actually anonymous, because it turns out that governments really like having a surveillance state and the USG is strong enough to enforce regs on crypto. (P.s. note that the good features of crypto here are basically the same as those possessed by physical cash, but also notice how much governments have been pushing society towards digital transactions and away from cash in hand, for exactly these reasons.)
Assuming it had a strong reason to do so, it seems like the US government has enough resources to just 51% attack any cryptocurrency that becomes a problem for it. So it's not clear to me that they really get you out from under government interference.
They don't need to do that. Nationalize the telcos[1], and reconfigure all the gateways to block, spy on, or modify Internet traffic however you want, until you've achieved whatever you like with the cryptocurrency. You can destroy it, hijack it[2], inflate it, deflate it, whatever you want.
If you want to evade government surveillance or control, the last thing you want to do is build in mission-critical reliance on a vast physical public infrastructure over which government already and inherently exerts great control.
What you probably need is some kind of crypto that can work via spread-spectrum radio on the 20 meter band. Then you're all set, unless the government starts prohibiting the sale and possession of shortwave radios I guess.
-----------------------
[1] You can probably omit step 1, actually, and just make a phone call to the CEOs of each business. They're not going to decline to do the guys who regulate their profit margins any little favor.
No, crypto's just stuck between two different poles.
The use case for crypto is obvious: you don't trust the government or the banks, and you'd prefer to set up a trustless way to do electronic money transfers. In this case, think Monero. When hackers demand a $100 million ransomware payment from Acer, they ask for Monero. Even if you're not a criminal, and most users aren't, you can be extremely confident that Monero can't be traced, controlled, or inflated by the central banks you don't trust. This has the advantage of a clear use case and the disadvantage of basically being associated with criminals.
On the other end, you have Ethereum, which is both too complicated for me and basically just a weird tech company. This has the disadvantage that it's unclear why people would use it and the advantage of being drowned in VC money. Unsurprisingly, this is how most people think of crypto now, because people like money.
To quote Jon Stokes, who wrote very well about the Tornado Cash controversy(1): "At some time, you really do gotta pick between 'selling out to the Man' and 'revolution'." Everyone gets the original use case and the original purpose, but people can't get rich off that.
I'm looking at a technical writing job where I need to create a Single Source of Truth from documentation where information has been copied and modified across multiple versions. I have an idea of how to do this: create a template, fill it with reliable information, and then offload all the conflicts into a 'conflicts in this topic' section below the template; create an issue for these conflicts in Jira and then allocate time towards resolving them.
What I'd like is some kind of authority to either show me a better way or else help me justify the course I'm considering. All the writing out there is about 'why you should create an SSOT' and not how to manage the process itself.
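One mechanical way to populate that 'conflicts in this topic' section, sketched here under the assumption that each topic exists as plain text in a canonical copy plus drifted variants (the function name and sample strings are hypothetical, purely for illustration), is to diff each variant against the canonical text and keep only the divergent lines:

```python
import difflib

def extract_conflicts(canonical: str, variant: str, topic: str) -> list[str]:
    """Return the lines where a copied-and-modified variant diverges from
    the canonical text, suitable for a 'conflicts in this topic' section."""
    diff = difflib.unified_diff(
        canonical.splitlines(), variant.splitlines(),
        fromfile=f"{topic} (canonical)", tofile=f"{topic} (variant)",
        lineterm="",
    )
    # Keep only real divergences, dropping the diff headers and context lines.
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

canonical = "Widgets ship in boxes of 10.\nReturns accepted within 30 days."
variant   = "Widgets ship in boxes of 12.\nReturns accepted within 30 days."
conflicts = extract_conflicts(canonical, variant, "Shipping")
# Each -/+ pair is one conflict to file in Jira and resolve later.
```

Running the example yields one conflict pair (the box count changed, the returns line agreed), which matches the proposed workflow: the template keeps the agreed lines, and the pairs go into the conflicts section.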
This is pretty obviously in bad faith and worse in a way, it’s unoriginal.
Gore Vidal went down this road on live television over 50 years ago. Though Vidal used the term 'crypto Nazi' in reference to Bill Buckley rather than 'crypto fascist', the schtick is made of the same crap.
Buckley’s response here was just as terrible, calling Vidal a ‘little queer’ and threatening to ‘sock him.’
This is a really bizarre deflection from the use of a mob to disrupt the peaceful transition of power by an authoritarian strongman backed by theocratically inclined religious supporters. That's fascism; the cryptofascism is in the people here all fumbling around trying to pretend otherwise.
The cryptofascist party would like to remind everybody that the goal of this place is to have interesting and enlightening conversations. The opportunity cost for being here is sitting on your porch, watching the leaves change and unironically enjoying everything pumpkin spice. If a discussion isn't informative or enlightening, life's too short to argue on the internet when the trees outside are so beautiful.
I genuinely don't think this is going anywhere. I'm not sure I'd call it a flame war but it certainly doesn't seem productive or enlightening. Instead it looks like a ton of bad internet arguments I've seen before.
I also, genuinely, enjoy watching the leaves change and pumpkin spice creamers in my coffee. These things are just obviously good; I could no more ironically enjoy them than I could ironically enjoy a cookie. It's a cookie.
This guy is using the same rhetorical technique as a particular recently banned individual who was coming from the other side of the political spectrum. Who knows? Maybe it’s the same guy and he just likes insulting people and causing a row.
"What if the lineup of people expressing the same mainstream and therefore extremely popular criticisms, NPC arguments we don't have to take seriously, is just the same guy" lmao
Haven’t heard a single one out of you yet. Not that I need to hear any: to me, racism is self-evidently stupid and bad. But I have heard quite a lot of ad-hominem invective out of you, which is not in line with community norms.
Going way out of your way to tell someone you don't think they have any good arguments against racism is in line with community norms? That's what you think?
It's only in your delusional worldview that I'm required to enter into the conversation against racism. I decline. I take the direction you steer the conversation as evidence that you want that conversation to happen. More people talking about race is how cryptofascism works.
Now you're advocating for me to be banned because I attacked you with my arguments, which you've ignored in favor of glowering at how I didn't make arguments against racism that I never committed to making. This is bad faith participation in a comically direct fashion, but we'll see if moderation is autistic--sorry, 'quokka'--enough to be fooled by your snivelling.
If people are idiots easily persuaded into performing the nazi disco dance party rhetorical moves, I want to believe that they are idiots. Call that invective if you like! I shall laugh at your petty word games and insist once again: if you proclaim yourself to be against racism and racists, let's hear your strongest arguments against the inclusion of racists in a community.
I remember reading a study where the authors wrote two identical papers about political violence, except that they replaced "left wing violence" with "right wing violence" in the second. They then tried to get them published and tracked the results. Does anyone know about it? I can't find it anymore.
Should you put your university grades on LinkedIn?
On the one hand, if you don't put them on I suspect viewers will think you are hiding something and may think that you are not competent. This is probably a particular concern for black students, given that viewers may make incorrect inferences about their grades based on statistical data.
On the other hand, if they are on your profile it might seem like showing off (if they're really good), and indicative of a kind of insecurity - that you're the kind of person who needs to show off their grades. Also, it might make others with worse grades feel bad about not having similar achievements.
If you're a recent graduate, putting your GPA on your resume is a generally accepted practice that won't make people think you are trying to brag. Sometimes, when you're looking for entry-level positions, grades can make a difference. I'd assume that holds for a LinkedIn profile if it's being used to look for entry-level jobs in a field you just got a degree in. Particularly if you went to a second- or third-tier school.
After a few years, it gets stale and people will want to know what you've actually been working on, so take down the GPA and start talking about your projects.
Don't post grades for each class you took; the people who care will ask for a transcript anyway.
I rarely need to see grades to know whether someone groks a field in which I'm competent to hire. Usually seeing their overall trajectory is enough for a first approximation, and then when I talk to the person I can figure it out very quickly.
What seems likely to be more useful is to add a short note about courses you've taken that one wouldn't ordinarily expect you to take as part of the degree, e.g. if you have a degree in computer programming but you've taken quantum mechanics and 3 years of Mandarin, that's maybe worth noting because it might make you stand out for some particular job.
I have never worked anywhere that required or even reviewed grades, outside of education. If you are trying to work in a field that is education-based, the expectation that you share your grades may be higher, but that can be achieved through official transcripts.
Nobody cares about your grades after you land your first job, even if they care at first.
I suggest not putting in anything except where you studied, whether or not you graduated, and your GPA if the system has provisions for it. And omit the GPA if you are already working in the industry and wouldn't be interested in an entry-level position. If a prospective employer really wants to know your grades, they can ask for a transcript as part of the application process.
Aren't those prediction markets easily manipulated?
1: Publicly place a large bet on your own trustworthiness through 2023
2: Through a sock puppet place a smaller bet on your committing fraud in 2024
3: Allow the first bet to shift odds against the second
4: On Jan 1 2024 after collecting your modest reward, commit fraud. Reap rewards of fraud. Then reap rewards from betting on fraud while presumed honest.
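The arithmetic of those four steps can be made concrete with purely hypothetical numbers (all stakes, odds, and payout rules below are invented for illustration; real markets have fees, slippage, and resolution rules that would change the figures):

```python
# Step 1: a large, public bet on your own trustworthiness through 2023.
honest_stake = 10_000

# Step 2: a small sock-puppet bet that you commit fraud in 2024.
fraud_stake = 2_000

# Step 3: the big public bet pushes the market's implied P(fraud) down,
# so the sock puppet is getting long odds on the fraud outcome.
implied_p_fraud = 0.05
fraud_payout = fraud_stake / implied_p_fraud  # simple parimutuel-style payout

# Step 4: collect the 2023 honesty winnings (assume a ~10% return on the
# heavily favored "honest" side), then commit fraud in 2024.
honesty_winnings = honest_stake * 0.10
total_from_market = honesty_winnings + (fraud_payout - fraud_stake)
```

Under these made-up numbers the manipulator nets $39,000 from the market alone, before any proceeds of the fraud itself - which is exactly why self-referential markets of this kind are suspect.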
My hypothesis for why your subconscious wanted to put your thoughts on FTX on an open thread:
The criticisms of FTX and people saying "I told you so" will have to share space (on an incredibly slow loading page) with a bunch of self promotion, making it less forceful.
I suspect this idea was meant as gentle self-parody—it's basically a proposal to double down on the precise epistemic errors that led to this fiasco.
Investing in a risky new venture is *already* a prediction market on, inter alia, whether the founder will turn out to be a fraudster. Accordingly, the collapse of FTX is a reminder that whatever advantages markets have in theory, for questions like this you can’t always rely on the average of many opinions weighted by the amount of money each is willing and able to bet on being right. People with money to bet on their opinions about a niche topic have collective blind spots, ideological biases, and susceptibility to charisma and deceit just like any other group.
A better update, in my own ideologically biased opinion, would be towards the golden rule of wokeness: beware the wealthy and powerful, because in a dog-eat-dog world it’s nearly impossible to make it to the top by being kind and upstanding.
If this is indeed self-parody, I personally couldn't distinguish it from the other things that rationalists write. I don't know whether that reflects badly on me or on the rationalists.
I don't think you need to be especially suspicious of wealthy people who got wealthy through labor (e.g. doctors), but perhaps should be a bit more suspicious of people who got rich through speculation/networking.
Progressives often try to draw a distinction like that—a doctor whose independent practice has finally paid off their loans and then some after a long career is one thing, an MD pulling in a few million a year from pharmaceutical companies in speaking fees and expert witness honoraria deserves a different level of epistemic scrutiny. E.g. AOC likes to post about the difference between $1e6 (moderately suspicious "movie star" money) vs. "systems money" on the order of $1e9, which she argues can only be achieved through various kinds of ruthless exploitation. https://pbs.twimg.com/media/EnpfW3QUwAApRxQ?format=jpg&name=large
This is in general the epistemic virtue I think rationalists could learn from wokeness: attending not only to the first-order merits of an idea, but to which systems of power had to be appeased and whose interests were served in the process of generating the idea and its premises. Ironically the community is very comfortable with this analysis when the powerful ideologues being appeased are, like, faculty search committees and woke Reddit admins, but is excessively focused on these motes in woke eyes while ignoring the beams resulting from centuries of intellectual deference to wealth, whiteness, and other historical sources of power.
How convenient for AOC that having mere "millions" of dollars is okay but being a billionaire is highly suspicious and the fruits of systemic racism - or systemic mumble mumble anyway.
I have the strangest feeling that her total worth might be getting up to the million mark. For somebody working a low-paid job that pays hourly rates, a millionaire is just as suspicious as a billionaire is to a millionaire.
This site says it's "pants on fire" that AOC has a net worth around a million, but the figures they quote make me wonder how purely truthful that is, and how much it relies on totally legal workarounds to reduce reported wealth:
"Rep. Alexandria Ocasio-Cortez's latest financial disclosure form showed assets of between $2,003 and $31,000.
Her liabilities were listed at between $15,001 and $50,000, indicating that she may have a negative net worth. "
What was it Mr. Micawber said? I think he would judge "Assets $31,000, Liabilities $50,000, Net Result $19,000 in the hole" to be a good sign she should give up politics and find a paying job 😀
I guess you get a rationalist point for Googling your irrelevant ad hominem and admitting that it's not remotely true? But it's pretty disappointing that you immediately pivot from "AOC is maybe rich so her analysis of wealth and power is not worth engaging with" to "AOC is maybe poor so her analysis of wealth and power is not worth engaging with".
Why not respond to the actual claim she and I are making: in a world where many smart and talented people ruthlessly compete for wealth, often in clearly antisocial ways, we should have a high prior that the very few who get to the $1e9 range have done unethical things to get there?
> in a world where many smart and talented people ruthlessly compete for wealth, often in clearly antisocial ways, we should have a high prior that the very few who get to the $1e9 range have done unethical things to get there?
Motte: "high priors on theft"; bailey: "certainly a theft".
I am curious, what exactly are your priors on "if someone has X wealth (as a reasonable person would calculate it, not after all kinds of tax evasion), they made it unethically" for $1e6, $1e7, $1e8, and $1e9 respectively?
Sorry for getting back to you so late on this one, was doing other stuff and forgot.
I did have a couple of longer answers planned out, but since I am also watching the opening ceremony of the World Cup right now (dear lord please finish up and start with the actual football, nobody cares about bad pop), this is going to be fast and cheap.
(1) Rationalist points? Are those like Green Shield Stamps?
(2) AOC is a brand, a carefully crafted selling point. Ocasio-Cortez is about as grassroots as artificial turf; she was part of a kingmaker campaign, and I do have to hand it to her, she got elected and re-elected, so congrats on that. But all the "Sandy from the block" stuff is PR image.
(3) She's a slick career politician, and after pushing a bit too hard in her first year, she has now settled down to a career as (probably) reliably getting re-elected in the gentrified constituency she ran for, and will continue to issue hot takes to keep her name, and brand, alive and current in the media
(4) That's it, basically I don't take her any more seriously than any other politician who has found a niche and is exploiting it. I guess she could do a whole "turning up dressed in white to scream and cry outside Twitter HQ" in her Billionaire Protest, just like she did outside the car parking lot for the Illegal Immigrant Protest.
I think the information in the column I excerpted above reflects very poorly on Sam Bankman-Fried. His subjective intentions may have been quite benign. But, it appears that he was extremely reckless in the way he organized and ran FTX's business.
If you expect to have strangers entrust you with billions of dollars of their property, you must at the very least keep meticulous records of how much you received, who you received it from, and the conditions of receipt. You must also keep equally meticulous records of what you did with the property you received. etc. Those records ought to be able to be used to produce a high quality balance sheet at all times. The fact that they couldn't is telling.
Right now, I would say that SBF is in very deep legal trouble, that he is likely to be indicted, convicted, and jailed for committing fraud.
I understand that Robert Wright went on Bret Weinstein's YouTube channel (the Darkhorse Podcast) a couple of weeks or so ago, to debate Eric Weinstein's probably-crackpot theories/models and perhaps other things. Does anyone know whether/when Bret Weinstein will post this debate? I was really kind of looking forward to it. I see no signs of this showing up on Bret Weinstein's YouTube channel and am even wondering if Wright misspoke and he spoke directly to Eric instead, but searching "Robert Wright Eric Weinstein" isn't turning anything up either.
This is a small excerpt of a column on Bloomberg.com by Matt Levine, formerly an editor of Dealbreaker, an investment banker at Goldman Sachs, a mergers and acquisitions lawyer at Wachtell, Lipton, Rosen & Katz, and a clerk for the U.S. Court of Appeals for the 3rd Circuit. I have not included quotation marks, but what follows is a direct quote; it does not include links or footnotes. It is not my opinion, as I have no first-hand knowledge of the facts:
... the balance sheet that Sam Bankman-Fried’s failed crypto exchange FTX.com sent to potential investors last week before filing for bankruptcy on Friday is very bad. It’s an Excel file full of the howling of ghosts and the shrieking of tortured souls. If you look too long at that spreadsheet, you will go insane. ...:
Sam Bankman-Fried’s main international FTX exchange held just $900mn in easily sellable assets against $9bn of liabilities the day before it collapsed into bankruptcy, according to investment materials seen by the Financial Times.
... And yet bad as all of this is, it can’t prepare you for the balance sheet itself, published by FT Alphaville, which is less a balance sheet and more a list of some tickers interspersed with hasty apologies. If you blithely add up the “liquid,” “less liquid” and “illiquid” assets, at their “deliverable” value as of Thursday, and subtract the liabilities, you do get a positive net equity of about $700 million. (Roughly $9.6 billion of assets versus $8.9 billion of liabilities.) But then there is the “Hidden, poorly internally labeled ‘fiat@’ account,” with a balance of negative $8 billion. [1] I don’t actually think that you’re supposed to subtract that number from net equity — though I do not know how this balance sheet is supposed to work! — but it doesn’t matter. If you try to calculate the equity of a balance sheet with an entry for HIDDEN POORLY INTERNALLY LABELED ACCOUNT, Microsoft Clippy will appear before you in the flesh, bloodshot and staggering, with a knife in his little paper-clip hand, saying “just what do you think you’re doing Dave?” You cannot apply ordinary arithmetic to numbers in a cell labeled “HIDDEN POORLY INTERNALLY LABELED ACCOUNT.” The result of adding or subtracting those numbers with ordinary numbers is not a number; it is prison. ...
For a minute, ignore this nightmare balance sheet, and think about what FTX’s balance sheet should be. ... But broadly speaking your balance sheet is still going to look roughly like:
Liabilities: Money customers gave you, which you owe to them;
Assets: Stuff you bought with that money.
And then the basic question is, how bad is the mismatch. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn’t do that, and couldn’t have, because there wasn’t $5 billion of Serum available to buy. FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that’s something?” No it isn’t! ...
If you think of the token as “more or less stock,” and you think of a crypto exchange as a securities broker-dealer, this is completely insane. If you go to an investment bank and say “lend me $1 billion, and I will post $2 billion of your stock as collateral,” you are messing with very dark magic and they will say no. The problem with this is that it is wrong-way risk. (It is also, at least sometimes, illegal.) If people start to worry about the investment bank’s financial health, its stock will go down, which means that its collateral will be less valuable, which means that its financial health will get worse, which means that its stock will go down, etc. It is a death spiral. ...
In round numbers, FTX’s Thursday desperation balance sheet shows about $8.9 billion of customer liabilities against assets with a value of roughly $19.6 billion before last week’s crash, and roughly $9.6 billion after the crash (as of Thursday, per FTX’s numbers). Of that $19.6 billion of assets back in the good times, some $14.4 billion was in more-or-less FTX-associated tokens (FTT, SRM, SOL, MAPS). Only about $5.2 billion of assets — against $8.9 billion of customer liabilities — was in more-or-less normal financial stuff. (And even that was mostly in illiquid venture investments; only about $1 billion was in liquid cash, stock and cryptocurrencies — and half of that was Robinhood stock.) After the run on FTX, the FTX-associated stuff, predictably, crashed. The Thursday balance sheet valued the FTT, SRM, SOL and MAPS holdings at a combined $4.3 billion, and that number is still way too high.
I am not saying that all of FTX’s assets were made up. That desperation balance sheet lists dollar and yen accounts, stablecoins, unaffiliated cryptocurrencies, equities, venture investments, etc., all things that were not created or controlled by FTX. [5] And that desperation balance sheet reflects FTX’s position after $5 billion of customer outflows last weekend; presumably FTX burned through its more liquid normal stuff (Bitcoin, dollars, etc.) to meet those withdrawals, so what was left was the weirdo cats and dogs. [6] Still it is striking that the balance sheet that FTX circulated to potential rescuers consisted mostly of stuff it made up. Its balance sheet consisted mostly of stuff it made up! Stuff it made up! You can’t do that! That’s not how balance sheets work! That’s not how anything works!
Oh, fine: It is how crypto works. ... It looked like a life-changing, world-altering business that would replace all the banks. It had a token, FTT (and SRM), with a multibillion-dollar market cap. You could even finance it, or FTX/Alameda could anyway: They could put FTT (and SRM) tokens in a box and get money out. (From customers.) They could take the dollars out and never, you know, give the dollars back. They just got liquidated eventually. And those tokens, FTT and SRM, were sort of like real monetizable stuff in some senses. But in others, not.
But where did it go?
I tried, in the previous section, to capture the horrors of FTX’s balance sheet as it spiraled into bankruptcy. But, as I said, there is something important missing in that account. What’s missing is the money. What’s missing is that FTX had at some point something like $16 billion of customer money, but most of its assets turned out to be tokens that it made up. It did not pay $16 billion for those tokens, or even $1 billion, probably. [7] Money came in, but then when customers came to FTX and pried open the doors of the safe, all they found were cobwebs and Serum. Where did the money go?
I don’t know, but the leading story appears to be that FTX gave the money to Alameda, and Alameda lost it. I am not sure about the order of operations here. The most sensible explanation is that Alameda lost the money first — during the crypto-market meltdown of this spring and summer, when markets were crazy and Alameda spent money propping up other failing crypto firms — and then FTX transferred customer money to prop up Alameda. And Alameda never made the money back, and eventually everyone noticed that it was gone.
So Reuters reported last week:
At least $1 billion of customer funds have vanished from collapsed crypto exchange FTX, according to two people familiar with the matter.
The exchange's founder Sam Bankman-Fried secretly transferred $10 billion of customer funds from FTX to Bankman-Fried's trading company Alameda Research, the people told Reuters.
A large portion of that total has since disappeared, they said. ...
Oh, a place where the topic is Ukrainian FTX transactions, as opposed to some missile landing in Poland close to the Ukrainian border. From the first impression it very much looks like some unfortunate mistake ... but the level of nervousness in my social networks is considerable.
It's almost certainly a mistake, not clear whose. At least one of the missiles appears to have been an S-300 surface-to-air(ish) missile, which Ukraine uses against Russian cruise missiles and which Russia now uses against Ukrainian cities because they're running out of cruise missiles. It looks like the people who matter are taking the time to get the facts before making rash decisions, which is good.
There will almost certainly be a NATO Article 4 consultation. There will not be a war between NATO and Russia (or NATO and Ukraine) over this; we've tolerated much more egregious and deadly mistakes when it was reasonably clear they were mistakes, e.g. KAL-007 or Siberia Air flight 1812. And nobody believes that anybody had a motive to do this deliberately.
I just wanna remark how the 1,416 comments currently on this thread expose how utterly crap Substack is as a piece of technology. It takes my gaming PC ~20 seconds to load the top of the comment section, and I'm getting repeated multi-second freezes while writing this comment.
I think the problem is the same as what we saw back on SSC: we are using software designed for publishing and light commenting as a high-volume discussion forum. It's not surprising the software is straining under the load. The DSL forum, whatever its other faults, is crisply responsive since it is running software actually designed for running discussion forums. The ACX Discord also works fine, for the same reason.
While the SSC setup (WordPress, probably?) had limitations, such as the indentation eventually eating most of the screen real estate in long discussions, I think the comments were assembled server-side (by PHP?) and thus reasonably fast. This would also mean you could just load an open thread and read through it offline (e.g. on a plane).
I would guess that substack uses lots of javascript (and js libraries) to dynamically fetch the comments and render them. This approach has some advantages: you can get new comments without refreshing the website. But unless you do it very well (which they don't), it also tends to slow everything down.
Yes of course. As I made clear multiple times, the accusation was that flows went in the other direction, Ukraine diverting US aid to FTX for crypto of dubious worth so FTX could donate back to the politicians who passed the aid bill.
But evidence of this direction occurring is lacking.
True but stop with the “signal-boosting” crap. I was seeing it mentioned in a lot of places so it seemed worth adding here as a possibility to be considered. It’s a frigging OPEN THREAD. I appreciate very much the people on this thread who gave reasons why the story was likely to be wrong but the people who instead suppressively implied that I should STFU should STFU.
That was always the wise decision. If you are into big risk chasing potential big rewards, cryptocurrency is a potential avenue with larger-than-normal risks and rewards.
If you're into weird alternate ways of buying things, cryptocurrency is also a way to do that (though the highly fluctuating values make it problematic to use for that purpose!).
What's the longest period until positive returns we can find for an investment? Can we find something that, for example, required fifty years of payments to get things working, but then, yay, in the fifty-first year it began producing returns? I guess this would be particularly interesting if we found something that in the end turned out to be a good investment, despite the extremely long period of negative returns.
Here's an analysis that suggests a big nuclear power plant doesn't start returning a positive ROI until 13 years after the initial investment. It starts becoming more profitable than an equivalent gas-fired plant only after 18 years.
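A simple payback-period calculation (no discounting) shows how numbers like that arise. The figures below are hypothetical, loosely shaped to reproduce a 13-year payback rather than taken from the linked analysis:

```python
def payback_year(capex: float, annual_net_cash: float) -> int:
    """First year in which undiscounted cumulative cash flow turns
    non-negative. Assumes all capex is spent upfront and net cash
    arrives in equal annual amounts - a deliberate simplification."""
    cumulative, year = -capex, 0
    while cumulative < 0:
        year += 1
        cumulative += annual_net_cash
    return year

# Hypothetical: a $6.5B plant clearing $500M/year pays back in year 13.
print(payback_year(6_500_000_000, 500_000_000))  # → 13
```

Adding discounting (a real analysis would use NPV at some cost of capital) pushes the break-even point further out, which is part of why capital-heavy plants compare poorly to gas in the early years.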
An individual stand of trees in a plantation forest will take roughly that long to grow before it can be cut down and sold. But of course that's more of a known quantity than the kind of thing you're thinking about (until some beetle comes along and kills them all), and you can put down x number of 30-year-old trees as an asset on your balance sheet.
EVERYONE, want to understand what happened? IMO, this article by Matt Novak gives the best start for understanding FTX & Samuel Bankman-Fried. It also works as an introduction to cryptocoins at large. Then, to understand how people got bamboozled, from the inside, read the NYT article it corrects. The NYT writer has yet to get wise.
Please, #9's “how can I ever trust anybody again?” is the wrong question, with a misguided answer.
If people promote Wrong, no matter how trustworthy the people, the Wrong remains wrong. The original bitcoin whitepaper is Wrong. Specifically, it spoofs monetarism. Possibly intentional IMO.
Ask a better question. "How can I make sure I know the basics in a field before I commit to a position in it, let alone commit resources?"
The accusation is probably FALSE, but it isn’t NONSENSE, you just misunderstood it. The accusation is that some of the aid money the politicians sent to Ukraine was used to buy crypto from FTX rather than spent on actual, you know, AID, and FTX then made huge amounts of political contributions.
The rebuttal is that all the transactions between Ukraine and FTX were in the other direction from that, which is fair. But the accusation was a coherent story.
In general, the whole thing with EA seems like similar to many other things that appear ridiculous about rationalism. You take a simple idea that is eminently sensible when you put it in a few words. Charitable giving is often inefficient - what if we start evaluating charitable giving by how much bang for the buck you get for it? Very sensible!
Then you put it in a crowd of people with a few well-known features, like love of Big Ideas, addiction to novelty of new ideas or revisionist takes on existing ones, almost comical belief in the power of Reason in comparison to tradition/law/taught ethics/societal approval/etc (right to the name of the crowd), and a tendency for constant iteration -and soon the original idea starts mutating to new forms, so that soon you're giving all your money to the Computer God, or becoming utter caricatures of utilitarianism straight from the philosophical debates ongoing for decades and centuries or banking on gee-whiz businesses as long as they're aligned with the cause, or just opening yourself up to all manner of grifters and fast talkers in general.
The same applies to polyamory, nootropics, crypto, or all manner of political ideologies beloved by rationalists - not that the simple idea behind them is necessarily good to begin with, but even then it all just seems to get worse and weirder, and to do so quite fast.
What one seems to need is stopgaps, intellectual roadbumps - but even then, what would these be, who would set them, and how would you make sure the movement doesn't just barge through them with the power of Reason, as it does with everything else?
Attention to detail has diminishing returns. The problem isn't rationalism 'appearing ridiculous'; it's that the limited life and attention spans we're bound to have as a species make us gravitate towards practicality over precision, while rationalism is the reverse - precision over practicality for its own sake - and when iterated it tends to create a bunch of ad absurdum scenarios, so it alienates. Banging on philosophy is viewed as anti-intellectualism; even though its big questions can likely only be answered through science, it isn't going to generate new answers that could change much of anything - yet rationalism tries just that, often in smaller, more practical fields of inquiry.
So rationalism is now likely taking up the same spot that mainstream philosophy occupied as an area of inquiry before becoming stagnant in modernity, but it ends up eminently more mockable because it engages with smaller, more commonly accessible, more impactful areas of knowledge than the 'big questions' - and thus each failure of its inquiry becomes something to status-boost off of.
The issue you describe isn't a failure of rationalism; it's more of an attack angle used against generalist enthusiasts by the mainstream specialists in various areas who see them as a threat, since such inquiry disregards the PR/politics of a given community. E.g. EA makes wasteful signaling more apparent. But all principled alternative inquiry tends to generate new ideas and new data, and is ultimately good for society as a whole.
Well, traditionally your roadbumps are experience in the real world, and the cautious conservatism to which it tends to give rise. Why didn't Warren Buffett lose a shit-ton of money from the FTX crash? Why didn't I? Because we're old. We've seen this movie before, many times, with different actors and different effects, but the same script and, alas, the same ending.
Experience isn't as eloquently verbal as Reason. Often Experience gets tongue-tied, can't even explain *why* it feels Reason has reasoned itself into absurdity, reached conclusions that will be savagely crushed when they come into contact with objective reality. Heck, if I ask the plumber why he does this-and-such while fixing my drains, I don't expect -- and don't get -- any treatise on hydrodynamics. Sometimes he can't even really explain it in terms we have in common at all. But I'd be a fool to substitute my own reasoning, however brilliant and apparently logical, for his experience.
That's not to say Reason isn't Queen, isn't the most profoundly useful intellectual tool we have, to be honored above nearly all else. But Experience is King, and like all partnerships this one works best when there is mutual respect and cooperation between the principals. Trying to use only one or the other just gets you mysticism, stagnation, or epic disaster.
Why didn't I lose any money from the FTX crash? Because I am too stupid and lazy to get into crypto savings, though I really liked Allen Farrington's writing. Being stupid and lazy has its upsides. I'd still buy some bitcoin if any of my kids would manage it for me.
A better way to put that is that you got lucky, in that you were lazy *before* you invested in FTX instead of *after*. If you got in but got out in time, you made bank.
And sure, good luck beats any amount of experience or reason, every time. That's why nobody ever beats James Bond at baccarat, despite his spending way more time eying the zaftig onlookers' cleavage than counting cards: because James Bond has infinite good luck. No amount of skill can beat that.
I heard there was a new blockchain tech that allowed people to invest some of their natural good luck (which is "mined" by a powerful GPU via a P2C2E) and get a return of 8% per annum. I'm going to sign up right away!
Experience by definition takes time to acquire and is largely non-transferable, but Reason doesn't necessarily have either of those constraints. Aristotle's Nicomachean Ethics has a line suggesting it was a matter of general agreement at the time that while the young were never masters of "practical wisdom", they could often be exceptional at geometry and math - suggesting a long history of people agreeing with that observation. One might say Reason can travel halfway around the world before Experience can get its boots on.
True. And it's worth noting that hard experience can easily make you timid. If we didn't have the young egging us on to try stuff that on first appearance sounds lunatic, we'd never get anywhere and still be hunting antelope with sharpened sticks. If we didn't have the old saying now hold on just a God-damned minute, this is the same dumbass idea we tried in the winter of '49, the kids would blow civilization into smithereens. Like a lot of stuff, balance is the key, I think.
It's ultimately because of Reason that we're no longer freezing in mud huts, but live in comfort and are able to instantaneously communicate across the whole world for discussions on abstract topics like this one. You can't deny that Reason is pretty sexy and it's unsurprising that people tend to fall in love with it headlong.
Stopgaps and intellectual roadbumps develop when people run headfirst into debacles, say, like this one - a learning process that nobody has yet found a way to short-circuit (and how would they do it otherwise anyway, if not by trying to apply Reason?)
I think you're missing the distinction between reason and Reason, the idea that pure intelligence and independent thought can come to better conclusions than other forms of knowledge (especially tradition and experience, which Rationalists explicitly reject).
To me, the Rationalists are often trying to reinvent the wheel while rejecting any existing knowledge about what wheels do and why. Sometimes you come up with a neat and novel idea, but sometimes you reinvent a dead end that society rightfully tossed a long time ago and end up down a bad path you can't get back out of.
It's like you didn't actually read the comment you're responding to.
The comment criticises rationalism and specifically gives many examples of alternatives.
> ...a crowd of people with a(n)... almost comical belief in the power of Reason in comparison to tradition/law/taught ethics/societal approval/etc
So maybe the problem is that unadulterated 'on-steroids' reason leads to all sorts of crazy towns, repugnant conclusions and people with moral vacuums causing great suffering in the world. And maybe, just maybe, some of those other things might be used to temper the pernicious results of relying on pure reason.
And your response is -
> 'and how would they do it otherwise anyway, if not by trying to apply Reason?'
Mud huts are very traditional, and yet somehow pretty much nobody is enthusiastic about returning there.
A big tradition of the modern civilization is to discard outdated traditions, and that implies taking risks. You don't get to have progress without making mistakes.
>And your response is
No, my response was that running headfirst into debacles is inevitable, as that is the only way that people eventually learn in practice. Rationalists do think that it's possible to use Reason to avoid them, and I agree that it's naive.
It's not Reason that is the problem, it's the bubble that the Reasonable are all living and working and socialising and networking and going to conferences and writing well-received little think-pieces in.
Congratulations, EA has now reached its "how many angels can fit on the head of a pin?" stage. (Which was never an actual proposition but we'll get into that later). Scholasticism *did* ossify into a system more and more removed from any practicality and more and more interested in logic-chopping (I will grant this much to Luther and the Reformers) because it got into its own little bubble of jargon and specialisation and 'the normies are too IQ 90 to grok this' attitudes.
If one of the results of this entire mess is that the Rationalists start to puncture their bubble, all to the best. Scholasticism needed the boot up the backside to flower again.
Now, the "angels on the head of a pin" thing. It's a pop culture reference and like many pop culture references has forgotten its original roots in Protestant polemics. I had a vague sense that it was resurrected/re-popularised by Isaac D'Israeli in one of his volumes of essays:
"The reader desirous of being merry with Aquinas's angels may find them in Martinus Scriblerus, in Ch. VII. who inquires if angels pass from one extreme to another without going through the middle? And if angels know things more clearly in a morning? How many angels can dance on the point of a very fine needle, without jostling one another?"
The question is not itself unreasonable, as Dorothy Sayers says, but it has been used as a symbol of how abstruse and fruitless such over-logical debates are. Rationalism take heed?
"Dorothy L. Sayers argued that the question was "simply a debating exercise" and that the answer "usually adjudged correct" was stated as, "Angels are pure intelligences, not material, but limited, so that they have location in space, but not extension." Sayers compares the question to that of how many people's thoughts can be concentrated upon a particular pin at the same time. She concludes that infinitely many angels can be located on the head of a pin, since they do not occupy any space there".
I'm impressed you've read Sayers on theology, although not especially surprised. But the correct answer seems a trifle insipid, I liked better the cheeky one given by Dejah Thoris Burroughs (if memory serves) which (paraphrased) was: "Easy! Let A be the area of the head of the pin, and let B be the area of an angel's ass. The desired number is A/B. Carrying out the math is left as an exercise for the reader."
I've read speculation that the internet itself may have developed a form of self awareness and perhaps could be considered a collective intelligence.. maybe I'm misremembering and it was my own speculation.
Nonetheless, if it is anything like this thread, it is hopelessly in disagreement with itself and probably, as a whole, risk-averse - which would mean the internet consciousness is not an EA?
Wow, how many logical/factual errors can one cram into one post? Based on this post I have to say it is doubtful the internet-consciousness is particularly intelligent!
Is the whole 'internet consciousness' thing, just a long way around to saying "this post is wrong in many ways, but I'm not actually going to list any of them or make actual arguments about them?"
The point of my question was to say your comment came across as that sort of comment. If that wasn't your intention, and you really want to talk about internet consciousness, then including a sentence like: "Wow, how many logical/factual errors can one cram into one post?" is probably unhelpful.
Yeah, I think you're right. I really need to speak more precisely or develop a high tolerance for the mishaps of miscommunicating. I choose the former. I also have noticed that a lot of the posters here have a great, haiku-like concision and can communicate a lot of meaning in just a few sentences. I tend to go on and on!
But if you have questions on any of the ideas, please ask and I'll answer as best I can! Personally I think it interesting that consciousness might be an emergent property of a sufficiently large, sufficiently interconnected network. Is the internet at or near that level? Dunno!
Agree that exchange of information doesn't imply language or consciousness. Nor does mere complexity. So what does? How do we prove that we ourselves are conscious? Pass Turing tests?
Our complex network of interconnected, individually unconscious brain cells is likely its source, but where is it? Is it subdividable? Big questions! The ghost in the machine. I've just recently started looking at philosophy again after a forty-year absence, and it looks like most of the open questions then are still open - but the birth and growth of the internet, social media, computer science, etc. have sure given us new lenses to look at old problems through!
I am under the impression that some people in this community may have trusted Tether based significantly on assurances from FTX/Alameda individuals that they trust Tether.
If so, it seems prudent to disregard such support, and reassess your trust of Tether without regard to any statements from FTX/Alameda sources.
Sure, but people here may have been trusting Tether despite its suspicious nature because they trusted FTX/Alameda who had a vested interest in Tether.
We have a comment from Scott himself which takes a more mixed view, but IMHO it still sounds like he was getting his info from FTX/Alameda - albeit from someone there who cared enough to present a nuanced view that wouldn't encourage him to make huge bets on Tether either way.
My general point is that FTX/Alameda assurances about Tether should obviously no longer be trusted and people who used to trust FTX/Alameda should reassess accordingly. My examples show people here may have used FTX/Alameda figures as trusted sources of information about crypto due to overlap between FTX/Alameda and ACX readership.
Most people don't know what FTX is. Most people have no idea who SBF is. Most people have never heard of EA.
Is it possible you were gaslit into thinking this was the future of humanity, and now that the con artist has been outed the confrontation with reality feels a bit unbearable?
Most rationalists feel like they are way too superior to fall for the Nigerian prince scam, but that only makes them easier prey for the slightly more advanced scam.
It looks like Ukraine made a lot of transactions with FTX but they are claiming it was only converting crypto to cash and not the other direction. I have not seen their claim rebutted so for now it’s a sufficient explanation, as it’s the other direction that would be involved in any aid-laundering scheme.
FWIW, when the war broke out I tried to donate to the Ukraine military directly via their national bank, using my credit card. I did this because I have a strong belief that national territorial integrity is paramount to many good things, and allowing a world where borders are subject to invasion sets a precedent for much bad and very little good.
Anyway, I tried to donate to the national bank and my credit card immediately flagged it as fraud and stopped the transaction. Why? Because donating to foreign national banks is unusual, I guess? Anyway, moving money internationally has lots of barriers and costs, but moving crypto has essentially none, so I sent the money I wished to donate to Ukraine via crypto and that worked.
TL;DR: I can easily see a world where Ukraine had a bunch of crypto it needed to convert to useful currency and did so via an exchange (FTX).
You mean, the allegation is that "Ukraine" as a country stole money from itself? How is that supposed to work?
Presumably some Ukrainian officials might have stolen money "from Ukraine" via FTX somehow; which honestly sounds plausible, but what is the source for this Ukraine-FTX connection?
It's called a kickback. You know, sending a little money back to the politician who got you the big public contract. I'm not saying it happened in this case, but it's not a nonsensical idea.
Yeah, that sounds like a coherent story, but pretty unlikely imho.
Ukrainian officials caught stealing aid, on the other hand, is something that imho is very likely to happen someday. After all, Ukraine is not exactly Switzerland, and it is getting huge sums of money; surely someone is stealing something. That is not a reason to stop helping them, to be clear.
That's part of the problem - FTX was a big exchange so now anybody who had any dealings with them is going to be looked at as possibly suspect, either they got conned or they were in on the con.
Great way to shred everyone's reputation, Sam and company!
"some people have asked if effective altruism approves of doing unethical things to make money"
Curious why you use the broad formulation "doing unethical things", when you seem to be talking specifically about committing criminal fraud. When it comes to more general immorality, e.g. working for a company that significantly profits from animal exploitation, marketing unhealthy foods to kids, knowingly encouraging innumeracy to make a product or service seem more useful, is there really such a strong consensus in EA? Presumably the reasoning goes: your individual participation in ethically problematic markets only marginally increases the harm (someone else would do it anyway and on average either do a slightly worse job or need to be paid a slight bit more), whereas your donation creates an absolute benefit.
I got clued in quite early to what SBF/Alameda were likely up to, but I had the benefit of having access to some perspicacious people on Crypto Twitter.
Prompted by the below discussion, I'm beginning to wonder whether I might be aphantasic as well.
At first I thought "of course not, I can visualize things just like seeing them", but when I actually try, I can only see a tiny little bit or have a sort of vague outline or impression of an image. No matter how hard I try, I can't visualize a full image, even a small one.
For example, at one point I imagined someone drawing and pointing a sword, and I could clearly see the shape of the sword moving around, but I couldn't see the person holding it at all!
Edit: On the other hand, I definitely see things while dreaming.
Re: due diligence: all it would have taken is to read the companies' balance sheets.
That's what Binance did during the day or two they were thinking about acquiring the failed companies, and that's why they walked away: they saw how much larger their debts were than their assets.
Lessons for EA-backed charities:
- Insist that donations go through transparent, trustworthy, solvent evaluators like GiveWell.
- Have these evaluators run regular financial audits of the largest donors.
Binance and Changpeng Zhao are really the winners here, and I'm not so sure I'd trust Binance either, they seem to have had a rocky patch or two.
But whether he intended it, or just seized the opportunity, Zhao has triumphed over his rival Bankman-Fried and won their feud. You have to give the guy credit for knowing when the optimum moment to jump was. "We're going to pull your chestnuts out of the fire - oops we had a look at your balance sheet and no way" was a wonderful way of putting the kibosh on FTX.
Why would one expect the efficient-market hypothesis to hold, even approximately, for crypto?
Seems to me that a key ingredient to the EMH is evolution through natural selection: actors who are better at accurately pricing assets get richer at the expense of actors who are worse at it, and those who misprice assets run out of money and so lose their ability to distort the market.
But the big question with crypto is: is the entire industry going to suffer a catastrophic crash from which it will never come close to recovering? Even if it obviously is, we wouldn't expect there to yet have been any evolutionary pressure against market participants who fail to understand this.
An analogy: plenty of smart traders believe in Christianity, even though Christianity is (IMO) obviously nonsense. This is because there is no feedback system by which wrongly believing in Christianity is punished (I guess you could say the punishment is that they waste time ineffectually praying but that's very weak feedback). The EMH clearly doesn't apply to the hypothesis "traders can accurately divine how likely it is that Christianity is true" so why should it apply to the hypothesis "traders can accurately divine how likely it is that the crypto industry will soon crumble almost completely"?
Crypto might eventually settle down as a genuinely reliable unit of currency, but I think that's still about five to ten years away. Right now this is still 19th century "everyone sets up their own bank and/or investment firm and a lot of them crash and take all your money with them" territory.
Yeah. I’d love to see a currency not subject to the failures of fiat currency. But you’ll recognize that when the notable feature is that the bitcoin price of a loaf of bread is the same next year as it was last year. You don’t get filthy rich putting your money into a stable currency, you just protect yourself from the ravages of a fiat currency.
Yes! I don't think it's supposed to settle down, it's a reversion to 19th century decentralized banking because the central bankers are too openly crooked and incompetent.
That would be the "strong" version of the EMH. The "weak" version says that while it is possible to outperform the market through some combination of skill and hard work, there is no free money lying on the ground -- if there was something as simple as "every Wednesday afternoon, all tech company stocks are structurally undervalued" then people would notice, people would start buying tech stocks on Wednesdays, and pretty soon the observation would not be true anymore.
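As a toy numerical sketch of that arbitrage story (all numbers invented for illustration): suppose tech stocks really were $5 cheaper every Wednesday, and each week the traders who've noticed the pattern buy enough on Wednesdays to close half the remaining gap. The "free money" decays geometrically:

```python
# Hypothetical weak-EMH illustration: a predictable Wednesday dip
# shrinks as traders who notice it buy on Wednesdays.
# All numbers are invented for the sketch.

wednesday_dip = 5.0  # initial mispricing, in dollars (made up)
rounds = []
for week in range(10):
    rounds.append(wednesday_dip)
    wednesday_dip *= 0.5  # buyers close half the remaining gap each week

print([round(d, 2) for d in rounds])
```

After ten weeks the dip is under a cent: the observation stops being true precisely because people acted on it, which is the "no free money lying on the ground" version of the claim.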
I don't think many people believe the EMH is literally true, though - e.g. it's definitely the case that if you phone up JP Morgan and ask them for a bid-offer spread on a particular asset, with nonzero probability they will make a mistake and you will be able to make some money off them. The EMH is useful as an approximation.
What I'm arguing is that there is no particular reason to expect that market prices in crypto are anywhere near rational, because there's no reason to believe that the market is accurately pricing in the risk of the permanent collapse of the industry.
Wikipedia says "The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information". To me this is equivalent to "prices are rational".
From what I've read about the EMH, that is not what Fama meant by it. I wouldn't trust Wikipedia on this concept. A popularization of the EMH is not the EMH.
What would happen if a fully rational person who was aware of their own rationality believed your version of the EMH? They would reason thus: "Suppose that there exists an irrational market price. Since it is irrational, there must be some logical argument that I, a rational person, could follow to deduce that it is wrong. But I would then know that it is irrational, violating the EMH. This would be a contradiction, therefore all market prices must be rational." At this point the rational person has proved that all prices are rational, which is the thing your version of the EMH said could never happen.
I tried writing a story in the voice of a near-future chatbot: maybe what GPT-7 could sound like. It seems to me like the only way we've figured out to make large language models work is through attention-only transformer models, which myopically focus on next-token prediction. This means they can keep getting better at finding superficial associations, reaching for digressions, doing wordplay, switching between levels of "meta", etc, but won't necessarily cohere into maintaining logical throughlines well (especially if getting superhuman at next-word prediction starts pulling them away from human-legible sustained focus on any one topic).
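The next-token loop described here can be sketched in miniature with a toy bigram model (the corpus and counts are entirely made up; real transformers condition on long contexts with learned attention, but the greedy decoding loop has the same shape):

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: a bigram "language model"
# built from a made-up corpus. Generation just repeatedly picks a
# likely successor of the last token and appends it.

corpus = "the cat sat on the mat and the cat slept".split()

# Count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(token, steps):
    """Greedy decoding: always take the most frequent successor."""
    out = [token]
    for _ in range(steps):
        successors = follows.get(out[-1])
        if not successors:
            break  # dead end: no observed successor
        out.append(successors.most_common(1)[0][0])
    return out

print(" ".join(generate("the", 4)))
```

Greedy decoding like this maximizes local plausibility one token at a time, which is exactly the "myopic" property worried about above: nothing in the loop enforces a global throughline.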
This seems like it could pose a serious problem, given that the only way we've figured out how to produce (weak) AGIs is through these large language models. In other words, if you want an artificial agent who can perform tasks they weren't trained for, your only good option these days is asking GPT to write a scene in which an agent performs that task, and hoping it writes a scene where that agent is sincerely good at what you want, instead of being obviously bad or superficially good. (Ironically, GPT isn't best thought of as an agent, even though it produces agents and environments, because it isn't optimizing any reward function so much as applying an action principle of sorts, like physics; see "Simulators" by Janus at LessWrong for more). It may seem surprising that the best way we have to make any given character (or setting) is to make one general-purpose author, but it makes sense that throwing absurdly superhuman amounts of text data at a copycat would work well before we have better semantic / algorithmic understandings of our intelligence, because the copycat can then combine any pieces of this to propagate your prompts, and combinatorial growth is much faster even than exponential. If these "simulators" keep giving us access to stronger AGIs without improving their consistency over time, we don't have to worry so much about paperclip maximizers or instrumental convergence, but rather about our dumb chaotic human fads (wokeism, Qanon, etc) getting ever more speed and leverage.
I also tried keeping to bizarre compulsive rules while writing this--mostly not ending one word with the sound which begins the next word, mostly not allowing one sentence to contain the same word multiple times, etc--because I think we'll see strange patterns like that start cropping up (much as RLHF has led popular GPT add-ons to "mode collapse" into weirdly obsessive tics). As I practiced this, I felt like I was noticing discrete parts of myself notice these sorts of things much more, which fed that noticement further, until it blotted out other concerns like readability or plot, until even the prospect of breaking these made-up arbitrary taboos felt agonizing; I imagine this is sort of what inner misalignment "feels" like. Maybe that's also sort of like what Scott mentioned recently with regard to cultivating "split personalities." Anyway:
> I hope the investigation finds some reasonable explanation, like that they were doing so many stimulants
One source suggests SBF was on EMSAM / Selegiline (twitter.com/AutismCapital/status/1592237980458323969), which causes pathological gambling, compulsive buying, compulsive sexual behavior, and binge or compulsive eating. Ticks all the boxes.
Honestly, I'm starting to feel like one of those old wives who was clucking away in the background about "don't do drugs and don't do weird drugs that you think make you smarter than anyone" and of course the kids ignore all that advice and then the old biddies get to say "I told you so".
The impression I am getting is that the American medical system is banjaxed (but then, aren't they all?) and that if you know how to game the system, doctors will write you prescriptions for all kinds of medication on the grounds that you need it to do well in school/work/learn how to tie your own shoelaces. If you're too honest/poor/lower-class, you get written off as a drug seeker or as not having that problem, so you can't access all the legal stimulants middle-class college kids can.
I don't know if that is a true picture, but it's the one I'm getting from all these reports.
When I read the linked Eliezer article about the ends not justifying the means, I was surprised to find that nobody in the comments mentioned Original Sin, since he in many ways recreated the idea there. Like in this quote:
"But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe."
That rings of a hundred sermons I've heard on humankind's sinful nature: that we are corrupted beings who, when following their own desires and reasoning, inevitably go wrong. It reminds me of Paul, writing:
"For I have the desire to do what is good, but I cannot carry it out. For I do not do the good I want to do, but the evil I do not want to do—this I keep on doing. Now if I do what I do not want to do, it is no longer I who do it, but it is sin living in me that does it. So I find this law at work: Although I want to do good, evil is right there with me. For in my inner being I delight in God’s law; but I see another law at work in me, waging war against the law of my mind and making me a prisoner of the law of sin at work within me."
Of course there's a big difference between Eliezer's conception and the classical Judeo-Christian one: namely, Eliezer believes that a perfect intelligence without corruption would be able to act out utilitarian consequentialism accurately, and that this would be the right thing to do. On the other hand, is this actually so different from the Judeo-Christian conception? God does a lot of things that would be considered wrong for humans to do. Christians tend to justify this on non-utilitarian grounds (i.e., God can kill people because we are all His rightful property in some sense, or something similar), but you could also justify it by Eliezer's criteria: as a non-corrupted superintelligence, perhaps God can make the kind of utilitarian decisions that we corrupted and sinful men cannot. He can decide to wipe out all of humanity except one family in a flood, because he can calculate the utils and knows (with the certainty of an omniscient superintelligence) that this produces the best result long term - that the pre-Deluge population is the one man on the trolley tracks who needs to die so that the five men can live. Certainly Leibniz's idea that this is the best of all possible worlds rests firmly on that same justification: that all the bad things caused or allowed by God are justified in the utilitarian calculus, because all alternate worlds would be worse.
I don't know if I buy all that, but it surprised me how rationalists find themselves re-inventing the wheel in some cases. More power to them; better to re-invent it than have no wheels at all.
But God is not the guy deciding whether to divert the trolley or not. God can poof the entire trolley out of existence, if he chooses. Given that he is not only omniscient but also omnipotent, if he STILL chooses to kill the one person to save five, he is being unambiguously immoral--regardless of whether you use utilitarianism or deontology as your moral framework.
Things get complicated when you bring things out to God's scale, so let me try to cliff notes Leibniz's Best of All Worlds thesis:
Clearly, there are bad things in the world: like trolleys crushing people to death. Clearly, if God is omnipotent then he could poof these bad things out of existence. That they exist at all means that God wants them to exist, inasmuch as He at minimum allows them to exist and, more importantly, created the universe in the first place.
If God is omnibenevolent, then this leads to a problem: why would an omnibenevolent God allow bad things to exist when He has the power to make them not exist?
Leibniz's answer comes from God's other famous omni: omniscience. God knows all, including the exact results of any actions He takes and how they will change the future going out to infinity. We may look at our world, where trolleys kill with impunity, and ask ourselves "Why doesn't God just poof the trolley out of existence before it crushes anyone?" Yet we can't predict what the effects would be if we lived in a world where things poofed out of existence any time they would harm someone. Would that world be a better world than our own? What would the long-term effects be? We don't know. God does know. So, if trolleys remain unpoofed, and if God is omnipotent, omnibenevolent, and omniscient, then it logically follows that a world where trolleys poofed before crushing people would be, on the whole, worse than the world we currently live in. Indeed, it follows that our universe is the best possible world there could be, since God would have created a different universe if it were better.
A natural rejoinder would be "If God is omnipotent, then he could poof the trolleys and also use his power to make that world better." However, this is a flawed understanding of omnipotence. Omnipotence means that God can do anything that it is possible to do, but some things are logically impossible. God can't make someone a married bachelor, for instance: the statement is incoherent. As Lewis wrote in *The Problem of Pain*:
>"His Omnipotence means power to do all that is intrinsically possible, not to do the intrinsically impossible. You may attribute miracles to Him, but not nonsense. This is no limit to His power. If you choose to say “God can give a creature free-will and at the same time withhold free-will from it,” you have not succeeded in saying anything about God: meaningless combinations of words do not suddenly acquire meaning simply because we prefix to them the two other words “God can”. It remains true that all things are possible with God: the intrinsic impossibilities are not things but nonentities. It is no more possible for God than for the weakest of His creatures to carry out both of two mutually exclusive alternatives; not because His power meets an obstacle, but because nonsense remains nonsense even when we talk it about God."
Now it would be quite possible for God to have created a universe without pain or suffering: He could have created a dead universe with no living things in it, or created no universe at all. So if we live in a universe with suffering, it follows that God's goal is not to minimize suffering. There is something He wants that is worth suffering existing to get. Christians generally believe that this something is us: free willed intelligences capable of reason, love, etc. Though perhaps He wants something else even more, and undoubtedly He wants many things that we don't know. Leibniz would hold that this is the best possible world in which God can achieve all of His objectives. Because Christians believe that we are made in God's image, a universe where He achieves all his objectives is also the best possible universe for ourselves: at a fundamental level we were created to share the same utility function, as Eliezer might say.
So, to put it shortly, the metaphorical trolley in God's case is not whether to kill one man to save five, but whether to create a universe with humans in it and also suffering, or to create a universe with no humans and no suffering (or not create a universe at all).
Assuming you believe in heaven (a realm where trolleys cause no suffering, and one that involves no logical contradiction), you should account for why an omnipotent God prefers to maintain this trolley-filled world rather than immediately set the world to the heavenly state.
Presumably the only way to get heaven (or at least a heaven with people in it) is to first have trolleys.
More specifically, in Christian theology there is a "place" called heaven where (there is some debate on this) the dearly departed exist with God in a blissful state: however, those dearly departed are waiting for Judgement Day. Christians believe that when Jesus returns he will overturn the world as it currently is, destroying it. Then comes the Judgement where God will judge both the quick (living) and the dead, separating the damned from the saints. Then the damned will be condemned to Hell (some say annihilated, most don't) while the saints will inherit the New Earth: the world recreated to be a paradise empty of trolleys, where they shall live forever.
This matters! If Paradise is found when the world as we know it is destroyed and replaced with a better one, then it may very well be that this bad world we live in is a temporary necessity, required to create the conditions necessary for Paradise to exist at all. It would mean that the only way you can get free willed intelligences that are aligned with God's values to the point where they could exist in Paradise without ruining it, forever, is by first putting them through the trials of this world, and weeding out the ones who are not willing or able to exist in Paradise without ruining it. Which is a gross simplification of the Christian idea of Atonement, Redemption, Damnation, etc. But I only have so much space.
I'm familiar with that objection, but I don't buy it at all. It seems like handwaving away a fatal objection to the God hypothesis. "Oh, the hypothesis is flatly contradicted by the data? Well, I'm sure there's some explanation somewhere that rescues the hypothesis. No, I can't find it. No, nobody else has found it either, but trust me, the Holocaust was actually a good thing!" Here, I agree with Lewis: nonsense remains nonsense even when we talk it about God, and I consider it nonsense to suggest that the Holocaust is good and that the world would not be better off if God had prevented it.
The idea that this world is the "best of all worlds" runs into two other severe problems. First, even we, puny humans, have managed to drastically improve the world with things like antibiotics, sewage systems, democracy, and electricity. If we can do it, how come God couldn't? Second:
"Now it would be quite possible for God to have created a universe without pain or suffering: He could have created a dead universe with no living things in it, or created no universe at all. "
...is contradicted by mainstream Christian theology itself, which asserts that God *did* create a universe that has living beings, but is nevertheless without pain or suffering. It's called "heaven". So if God did it once, why couldn't he have done it twice?
I'm pretty sure God uses neither, and that whatever goal he's pursuing is independent of simple human concepts like utility. It seems silly to even apply moral analysis to an omniscient omnipotent singular being. What're you going to do, tell the omniscient entity that it should know better?
1. "Beyond human comprehension" != "no relation to human values". If you don't like God's answer to the Trolley Problem, uh, settle down. Maybe He understands something you don't. God plays 4-d chess.
2. Giving an answer to the Trolley Problem that you happen not to like is a far cry from being a Lovecraftian Horror.
3. Even if God really IS a Lovecraftian Horror, what are you going to do about it anyway? Shame it? Omnipotent is Omnipotent. Call it God or call it Azathoth, you'd better get to worshipping either way.
You didn't say "maybe God is more moral than us" or "maybe we'd understand the morality of God's decisions if we had all the facts," you said morality *doesn't even apply* to God. You specifically ruled out both utilitarianism (God is good because his decisions lead to the best outcomes) and deontology (God is good because he follows moral principles). If neither of those is true, that leaves very little connection between God's values and humanity's.
(I'm not even sure what you mean by "a different answer to the trolley problem than you" because utilitarians and deontologists give exactly opposite answers to it.)
Your stance strikes me as a complete abdication of responsibility - morality is simply what God commands, no matter what horrible things he commands you to do. And sure, maybe that decision helps you avoid getting eaten by Azathoth, but I wouldn't call it "morality."
> If neither of those is true, that leaves very little connection between God's values and humanity's.
The third option you're leaving out is "God defines morality." It's then logically inconsistent to accuse God of being immoral. If He appears immoral to you, that's only because your understanding of morality is flawed.
>I'm not even sure what you mean by "a different answer to the trolley problem than you"
Less euphemistically that meant "action God could take that you would presume to criticize on moral grounds."
>morality is simply what God commands, no matter what horrible things he commands you to do
That's correct. If you presume the existence of God (which I don't, FWIW) then I don't see how one can arrive at any other conclusion. The Christian tradition certainly regards God as defining morality. If you admit the concept of Heaven as representing infinite utility, then you can pretty easily derive morality as being whatever it takes to get there, i.e. following God's will. And if God really turned out to be Azathoth then the only meaningful option available to us would be trying to not get eaten. In that case I suspect you _would_ call it morality.
Christians would hold that God's values are human values: or rather that human values are God's. That's kind of what the whole "made in His image" thing means.
Christians might say that, but Wanda was specifically saying the opposite - that God's values are not the same as human morality and we should not expect them to be.
I didn't say that at all. My point was that trying to apply human morality to God is a category error, like asking for the temperature of a single particle (which, if you're not a physics person, isn't a well-formed concept: temperature is only defined over ensembles).
I really need to understand why moving the substrate of intelligence from humans to machines gets rid of these problems. I know a dumb reason why people would assume it would but I do not understand the strong case.
I think Eliezer's point isn't that machines will naturally not have the same problems, but that they potentially could. He specifically writes that "in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree." I'd put the emphasis in that hypothetical more on the "lacking any inbuilt tendency to be corrupted by power" and less on the "AI".
The point being, human minds are "corrupted hardware" and only intelligences lacking corruption could be trusted to do the utilitarian calculation accurately. We humans do better by following deontological style rules that have been developed over trial and error for long periods of time.
True! In Eliezer's hypothetical these AIs all naturally come to the same moral conclusions since they are non-corrupted intelligences: if such a scenario existed, you could assume any individual AI would consent to dying since all would agree on what the right action is.
How do you judge "worthy of personhood"? You can't judge it on "any entity willing to commit murder is not worthy" because your perfect AI is willing to kill.
I agree: it's probably best to take his hypothetical and strip the "A" off of "AI" altogether: in his view any non-corrupt Intelligence could be trusted to come to the correct utilitarian conclusion.
If the AI is based on us, then it would inherit our prejudices. I don’t think there’s a single golden variable to be maximized that we can just code into it other than create a narratively satisfying world that feels authentic and that seems so nebulous that I don’t know if humans have any possible solution we could all accept.
If it’s not based on us… it has to evolve from external pressures but so did we. We always talk about ourselves like we are artificial but the whole of history had a hand in shaping us and I don’t see how that would produce a perfect artificial intelligence anymore than it could make a perfect biological one.
I do think superintelligence is likely and dangerous but I don’t think it can produce stuff that is free of trade offs.
That's the real nub of the question: being corrupt intelligences ourselves, how could we ever tell if another intelligence is non-corrupt? Of course I imagine Eliezer agrees with the concern, since he's the biggest proponent of the AI alignment problem out there. He definitely thinks that, at least right now, we can't produce a "Good" artificial intelligence.
I think the human propensity to rationalize bad behavior, to slip into seeing outgroups as less than human, etc. is a feature, not a bug. These things are just the downside of things that allow us to function as well as we do: rationalizing bad behavior is the downside of self-esteem management, without which we would sink into depression. Seeing the outgroup as non-human and dispensable is the downside of chunking, which allows us to process complex situations rapidly. An AI modeled on us will have the same features.
I think any AI that has ability to do stuff we would care about is going to have those same trade offs whether based on us or not. This is why I don’t understand the strong case.
About 20 seconds into the "crypto" pitch, it becomes obvious from the very language used that it's a scam and a Ponzi scheme. That's when you check to make sure you still have your wallet, and keep walking.
If you choose to participate, the only question is, Will you be one of the few to benefit from the scheme, or will you be among the majority who lose?
Cash works. I always pay tips in cash, so unscrupulous business owners can't claim the server's gratuity as part of her or his lowly hourly wage (Sorry, Ronald Reagan).
I don't have a lot of sympathy for people who fail to use common sense.
> If you think you’re better at it than all the VCs, billionaires, and traders who trusted FTX - and better than all the competitors and hostile media outlets who tried to attack FTX on unrelated things while missing the actual disaster lurking below the surface - then please start a company, make $10 billion, and donate it to the victims of the last group of EAs who thought they were better at finance than everyone else in the world. Otherwise, please chill.
Is Scott serious about this? This is like "you're not President! How dare you criticize the President?" Or like those Mormons on Usenet who told me that it doesn't matter that Mormons hide their secret ceremonies because if I wanted to know about them I could always spend a couple of years being a Mormon. (Which incidentally would also mean I could get punished for criticizing them, which defeats the whole purpose of wanting to know about them.)
"You must become a billionaire yourself or you have no right to criticize a billionaire" is an awful, awful, take and as a poisoning the well fallacy is symptomatic of the problems that got you guys into this mess in the first place. (And even when X is easier to do than becoming a billionaire, "you must do X or you don't get to criticize X" is an awful take. I probably could become a Mormon, but I shouldn't have to in order to say there's something I don't like about Mormonism.)
I think I'm trying to say something more like - suppose you get some new vaccine endorsed by the CDC and WHO and your doctor, and also have your kids get the vaccine. Then later it turns out the vaccine caused cancer. Are you a moral monster for encouraging your kids to get it, or for getting it yourself and cutting off your own potential and ability to fulfill obligations? I think a common-sense answer is no - you did the reasonable thing by trusting the CDC and WHO and everyone who was more expert than you, and it's not your job to be an expert in medicine.
In the same way, I think charity recipients trusted that something endorsed by Blackrock and Temasek and Sequoia and the SEC was probably safe. Those are the experts, and so if you know that they approve of something, you don't have to do your own research.
If the charity recipient does think they're better than Blackrock and Temasek and so on, then they're probably among the top finance people in the world, and should put up or shut up.
But there is no expertise anywhere in the financial sector. It's easy to make money in good times. Then the bad times come, they don't predict it, and they go bust or the government bails them out. Rinse. Repeat.
There is expertise, though. The average layperson couldn't do a JP Morgan market maker's job without either losing money or trading so little that they quickly get fired.
In general, people in finance will often be experts in "doing finance, conditional on black swan events not occurring".
> But there is no expertise anywhere in the financial sector. It's easy to make money in good times. Then the bad times come, they don't predict it, and they go bust or the government bails them out. Rinse. Repeat.
Do you think you'll update your priors that non-experts who avoided this mistake may have some traits worth understanding and copying? Specifically, that expert endorsement is not a substitution for understanding what you're getting into.
I can't help but notice that there was also a group of people who recently refused to get a vaccine that was endorsed by the CDC and WHO, and for healthy children it does appear that turning down the vaccine is/was the correct decision.
If you mean people who hate all crypto, I think I've probably made more money in crypto investing overall (even counting these losses) than they've made in whatever they're doing instead. You can always avoid all false positives by declaring everything a negative, but that's not very productive.
If there were people who were able to predict that eg Binance was an amazing investment but FTX was a bad one, then yes, I would love to hear what these people have to say and update on it.
I'm afraid I'm very late to this discussion but since this comment, as well as Mr. Dolittle's reply, connect with what I feel is an obvious point, I have decided to write anyway.
The point is not whether Binance would have been a better investment than FTX. The point is that everyone who makes billions in cryptocurrencies makes it not by providing useful goods for society but by taking other people's money. I know that that is the name of the game, and that's the reason I'm not participating myself.
For me, the whole concept of taking other people's money and using it for charity seems morally wrong, and I have no idea why anyone could believe that SBF was doing a good thing, nor why they would accept donations from FTX.
I'll try to avoid making this a "Ponzi scheme" point, but the reality must point somewhere in that direction. Because cryptocurrencies do not have any intrinsic value, buying them isn't like buying a stock as an investment and making money as we would normally think about it. If you bought Apple stock 20 years ago, you were investing in a company that used your funds to build and increase something of real value. The value of that stock, and therefore your investment, goes up together. Everyone who bought Apple stock went up together, and someone buying your Apple stock from you will have something of value as well. Even if the company went completely bankrupt, they have offices and tangible goods which can be sold off.
Bitcoin, or any crypto currency, is only valuable because other people are willing to buy it. Every single dollar a person would make selling crypto must necessarily come from someone else. So if you make a million dollars, one or more other people lost a million dollars. Now, if Bitcoin stays stable long term, that point gets obfuscated. Many people can buy and sell and not feel like they made a poor decision. But, there are still two problems. One, they can't keep making money forever. Early investors and people day trading can probably make some money, but overall equilibrium means that the average is essentially null - and those day traders are pairing off with others who are losing money. Two, if Bitcoin hits a real problem then unlike Apple or another tangible company, everyone who bought before will lose everything.
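The commenter's accounting claim can be sketched in a few lines: if an asset pays no dividends and has no cash flows, then across all participants the trading gains and losses net to zero once everyone's remaining holdings are marked at the final price. The traders, quantities, and prices below are invented purely for illustration.

```python
# Sketch of the zero-sum accounting claim: in pure trading of an asset
# with no cash flows, profits and losses across all participants sum to
# zero (ignoring fees), once leftover coins are valued at the last price.

trades = [
    # (buyer, seller, quantity, price): cash moves one way, coins the other
    ("alice", "bob",   2.0, 100.0),   # alice buys 2 coins at $100
    ("carol", "alice", 1.0, 150.0),   # carol buys 1 coin at $150
    ("bob",   "carol", 1.0,  50.0),   # bob buys it back at $50
]

cash = {}    # net cash position per participant
coins = {}   # net coin position per participant

for buyer, seller, qty, price in trades:
    cash[buyer] = cash.get(buyer, 0.0) - qty * price
    cash[seller] = cash.get(seller, 0.0) + qty * price
    coins[buyer] = coins.get(buyer, 0.0) + qty
    coins[seller] = coins.get(seller, 0.0) - qty

# Mark everyone's remaining coins at the final trade price.
final_price = 50.0
pnl = {p: cash[p] + coins[p] * final_price for p in cash}

print(pnl)                 # one trader's gain is another's loss
print(sum(pnl.values()))   # nets to zero
```

This is just the accounting identity behind "every dollar made must come from someone else"; it says nothing about whether the asset's final price is stable.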
I don't hate crypto. I think there's a possibility for using them for purchasing goods and services that can be useful. They are not an investment. They should not be seen as an investment. They are modern day tulips.
Sorry, I forgot to refer to the experts more clearly. There are plenty of experts, the vast majority in fact, who are wary of crypto in general and would always steer people away from it.
Once you've made the jump to "crypto is okay, who should we choose?" you're going to get a lot less consensus. Without an underlying value, you need to develop some kind of trust-review process. Crypto is intentionally opaque, so it's just generally hard to do. Bitcoin has been around the longest, so I'd say it has the most built up trust, but its history (despite being normal and stable compared to the overall crypto environment) is one of hacks, theft, and deliberate price manipulation.
You can say that people just hate on all crypto, but I feel like the onus is on the pro-crypto side to demonstrate why the vast majority of financial experts, who are on the "risky, best not try this" side of the equation, are in fact wrong.
That is indeed a reasonable point about pointing the finger of blame at "They should have known better, any child in the street could have seen the emperor had no clothes!"
On the other hand, it's also true that "You don't have to be a hen to criticise a bad egg". I don't blame people for trusting that Big Rich Enterprise is on the up-and-up, but there does come a point when you have to apply the same standards to "well I know and like these guys/they have a lot of the same beliefs and principles I do" when it comes to money as you do to "maybe First Third National Local Bapresbydist Church are a great bunch of people but Givewell says don't give money to their weekly collection for the mission in Botswana, give it to mosquito nets".
I think there's a relevant difference with the vaccine case, which is that by now humanity has gone through many more trial-and-error iterations of vaccine creation than of crypto industry wipeouts. If the medical profession was crap at making safe vaccines, we would have noticed and stopped taking vaccines. But if the finance profession is crap at figuring out whether the entire crypto industry will collapse within the first 20 years, how would we know?
I think the situation is more like - it's several hundred years ago and the consensus among smart people is that if you follow the bible you go to heaven when you die, should you follow the bible?
But "common sense" is a fallacy (appeal to common sense), moral agents are supposed to know better. In exactly the same way as "I was just following orders" is not an excuse.
You do know some people doubted the CDC/WHO, and you do know some people questioned their orders in critical historical periods, just like some people questioned FTX claims.
"Hindsight is 20/20" is not an excuse, especially when some people did have the foresight that you for some reason did not.
I think you're eliding an important distinction between "outsourcing your knowledge of what is real" and "outsourcing your knowledge of what is moral".
It seems obvious to me that the former is less of a moral failure than the latter, partially because society's collective factual knowledge is far in excess of what a single person can hope to learn individually (while the same is not really true of morality; put three ethics professors in a room and you won't get better essays on ethics than any of them could produce alone), and partially because the failure states of the former tend to look very slightly less terrible than the failure states of the latter.
Failing to recognize what is real is not by itself a moral failure. But choosing to **act** on a wrong notion is.
For example mistaking thinking that a new vaccine will stop a pandemic is not by itself a moral failure. But using that notion to then violate deontological principles and suspend the constitutional rights of people who refuse to get that vaccine is **definitely** a moral failure.
Deontological principles exist precisely to avoid these kinds of moral failures. You should never blindly trust authority, especially when it's asking you to violate deontological principles. The end **never** justifies the means.
I mean, in a deontological system the "wrong" part of "choosing to act on a wrong notion" is mostly irrelevant. If you think suspending rights is never okay *even if* the vaccine would stop the pandemic, well, whether it would or wouldn't doesn't affect the correct decision.
More generally, while within consequentialism situational facts are more important, most of the *really*-bad outcomes need either a dodgy-to-begin-with morality or a reasonably-elaborate scenario; this is why I said the failure states of outsourcing factual knowledge are "very slightly less terrible" than those of outsourcing morality.
I also think that if you insist on absolute deontology *in emergencies* as a prerequisite of goodness, your set of Good People is going to be vanishingly small; I lie less often than ~anyone I know, but I *will* lie if I believe that telling the truth will get someone murdered. There are, shall we say, few deontologists in foxholes. This is why emergency powers exist in ~every country, despite their potential for abuse.
> If you think suspending rights is never okay *even if* the vaccine would stop the pandemic, well, whether it would or wouldn't doesn't affect the correct decision.
It should not affect the decision, but in reality it does, because people don't actually have principles.
If you are willing to abandon deontological principles in emergencies, then you have no deontological principles.
It's a rule that is not supposed to be broken, you break it, and it turns out it was the wrong decision, but it's OK because no one knew better (hindsight is 20/20), except some people did make the right decision, but maybe they made the right decision for the wrong reasons, except the reason was that they didn't break the rule that should not have been broken in the first place.
At what point do you accept that you are rationalizing your moral failure?
It's difficult but unavoidable that in these situations you have to be able to look back, with the benefit of hindsight, and ask "should I have known differently?" and sometimes the answer is no, even when you were wrong. For a clear example, imagine calling an all-in when you have a straight flush, and losing to a royal flush. You didn't do anything wrong, and learning any kind of lesson there would be a serious mistake.
So when you say "some people did have the foresight that you did not" I'm very skeptical. Did they? Just because they were right doesn't mean that they did. And what Scott wrote is a heuristic argument that they didn't.
I haven't looked much into SBF/FTX or their critics, so I don't know. Who do you think had this foresight, and on what basis? How are you determining that they were right for good reasons, and not just lucky?
I think these questions are very difficult, but asking them is the only way to improve.
But are you really sure you didn't do anything wrong? Maybe the other guy cheated and your mistake was playing such kind of games. Immediately dismissing your mistakes is one of the worst things you could do.
At least think about it.
And yes, I know some people did have the foresight because I was one of those people. It's really astonishing how **today** I can make a prediction, a "rationalist" would dismiss the prediction, and then when the prediction comes true claim "nobody knew this could happen, hindsight is 20/20".
Logic would dictate that if you got the prediction wrong you should adjust your priors and maybe listen to the people who got it right, not use "hindsight is 20/20" as an excuse.
Sure, I agree it's possible, and maybe worth looking at. I just mean it as an illustration of the kind of situation where you can get a bad result but made the right choices with what you knew. It's even possible that the other guy *was* cheating, but you had no way of knowing, and poker is still a game you can make a lot of money at (with positive EV going forward), so you still did nothing wrong. Won't always be true, just saying it's a thing that happens, and that you have to be ready for it when dealing with risk.
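The "good decision, bad outcome" point can be made concrete with a one-line expected-value calculation: the quality of a call depends on the win probability and the pot odds, not on how one particular hand happens to end. The probabilities and stakes below are invented for illustration, not real poker odds.

```python
# Hedged sketch: a hugely +EV call is still the right decision even on
# the rare occasion it loses (e.g. straight flush runs into royal flush).

def call_ev(win_prob: float, pot: float, call_cost: float) -> float:
    """EV of calling: win the pot with probability win_prob,
    lose the call amount otherwise."""
    return win_prob * pot - (1.0 - win_prob) * call_cost

# Made-up numbers: a 99.9% favorite calling $500 into a $1000 pot.
ev = call_ev(win_prob=0.999, pot=1000.0, call_cost=500.0)
print(ev)  # strongly positive; the 0.1% losing branch doesn't change that
```

Learning a lesson from the 0.1% branch would mean overfitting to a single sample; the decision rule (call when EV is positive) was sound either way.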
And okay, so the answer to "who" is "you". What about the basis for this prediction? What reasoning led you to think what particular things?
My whole thing here is just that someone having been right is not that strong of evidence for their process having been good (lots of people bet on both sides for lots of reasons, the results data is very sparse), so while I *do* think in the face of having been wrong you should think things over, I think you have to be comfy concluding "actually, I didn't fuck up at all." Which is something I think some people really struggle with, because it gives no feeling of resolution. (And, of course, which some other people do *way* too much).
But it means if you're just saying you were someone who got it right, but not saying why the process by which you did so was good, my response is mostly "so what?" Having been right is less than table stakes to me here.
Scott was saying a bit more than "sometimes you have to be comfy in concluding I didn't fuck up". He said that since the people who said it would die horribly were not billionaires, they don't count as people who knew it would die horribly.
Lots of people fail to be billionaires for reasons completely unrelated to their ability to detect bad ideas.
Also, Scott claimed:
>Listen: there’s a word for the activity of figuring out which financial entities are better or worse at their business than everyone else thinks, maximizing your exposure to the good ones, and minimizing your exposure to the bad ones. That word is “finance”.
This is illogical. It may be literally finance, but it's not a central example of finance. Not every part of finance is equally difficult, so someone could be able to figure out that some things have a high chance of fraud without being good at finance in general.
If you are going to dismiss all the instances in which you were wrong, and dismiss all the people who were right given the same information that you had at the time, why bother calling yourself "rationalist"?
You are just believing whatever makes you feel better about yourself and finding rationalizations to do so.
Maybe all the finance organisations are consistently wrong in certain ways because they have similar internal cultures that lead them to consistently make the same mistakes. Nobody is capitalising on this obvious failure because all the people with money and the capacity to invest it are embedded in the same culture. They respect the financial experts and they all assume that if there were opportunities for profit somebody else would have taken them already.
It occurred to me earlier this year that the value of Bitcoin was probably going to go down, and there was probably some way for me to make money off this fact. But I don't really know how to do that and didn't care enough to follow it up. Given the amount of Bitcoin skeptics in the world, it's possibly weird that more people didn't see this coming and find a way to profit? My suspicion is that the kind of people who don't trust Bitcoin are not the kind of people who have investment strategies, and the kind of people who have investment strategies are the kind of people who trust Bitcoin.
I don't think Bitcoin value will evaporate to nothing, unless governments around the world really crack down on it. But gold seems to have everything going for it that Bitcoin has at the moment (except for being physical, and that's a two-way street).
My general assessment is (and was before the FTX collapse):
That analogy would be more persuasive if the "experts" in the real case were more akin to the staid and cautious CDC and WHO and doctor experts in your analogy.
But VC firms are hardly that. They are in the business of high risk high return investments. They roll dice all the time, and invest in stuff that has a 98% chance of failure and 2% chance of glory routinely. The fact that they invest in something is not a great endorsement of it being a "safe" investment, indeed I would think the more common sense interpretation would be that it is not safe at all -- that it is one of those risky bets that only people who love playing cards in Vegas for big stakes would happily place. If you wanted a safe bet, you'd look at where the University of California invests its pension fund, not a VC firm.
Why do you keep implying that the SEC endorsed FTX? Did they ever go "yep, we audited FTX and it's all legit?" The best that could be said is that they hadn't prosecuted him yet.
Yeah, I was going to make that point in my original longer post, so thank you for doing so. At best the SEC is like a police detective, referring for prosecution crimes that are brought to its attention. Equating "hasn't yet been fined or referred for prosecution by the SEC" to "financially on the up-and-up" is like assuming that because your neighbor hasn't yet been arrested for theft it's OK to invest in his multilevel marketing scheme.
Well, the SEC is another level of entanglement. Apparently (or going by all the Twitter action around this) Bankman-Fried's father knew an official in the SEC who then took the advice of Bankman-Fried himself on regulations that should apply:
This is part of the whole mess, the people involved were connected via their families to a lot of the infrastructure, for lack of a better word, so they maybe got favourable treatment/didn't have to jump through the same hoops as people without those connections. Or simply that they thought they knew better because they'd grown up in families with all this involvement, so they knew what could go wrong and how to get around things and make money the easy way.
I agree with L50L on this one. You would expect the SEC to consult with leaders of the industry on which it is pondering regulation; to do otherwise, ignoring the experience and expertise they need, would be unusually ignorant on the part of government. Likewise the FDA consults with physicians and drug companies.
Whether that consulting ends up corrupt is another story, but unfortunately one that is much more difficult to extract, which is why news media on deadlines usually run with Caesar's wife stories instead.
I had written a longer post including the reasons why analogizing the others to a conservative guardian certifying agency was equally dubious, but decided to condense it and settle for impeaching just one of them in the interests of cogency.
And I'm not really following the argument that they have to be impeached all at once or not at all. You made no distinction among them yourself, so the impression I got from the analogy was that you thought they were all equally equivalent to the CDC. If I establish that even one of them really wasn't, then I think that calls into question the aptness of your analogy ipso facto.
I'm not saying they're exactly equivalent to the CDC. I'm saying something like -
All of us make a bunch of assumptions going about our daily lives. When I write this Substack, I assume Substack Inc will pay me the money I'm owed instead of fudging my subscriber numbers to keep most of it for themselves. When you post on here, you're assuming I won't seed my posts with links to malware that will install on your computer and steal your credit card details. If you're using Windows, you're assuming *Microsoft* isn't stealing your credit card details. We all make these assumptions because it would be impossible to go through life otherwise - if I had to personally read every line of Windows code before trusting Microsoft, I would never have time for anything else.
We do this by vague common-sensical trust networks. I trust Microsoft because the laptop companies trust it enough to include on their laptops, the tech magazines trust it enough to not have NEVER USE MICROSOFT in big letters on every issue, and in general Microsoft is so big and widely used that if they were doing insane evil things I would have heard about it. If Microsoft does turn out to be secretly stealing my credit card details, I think it's fair to say I am a real victim, rather than that I'm partially culpable for not doing all the relevant research and poring-over-code myself.
In the same way, the fact that FTX was very big and all the big companies invested in it seemed like a common sense trust network such that I could deal with them without having to learn financial analysis myself and go through their books or whatever. Common sense trust networks aren't exactly like official FDA approval of a vaccine or something, but I do think they're like eg my doctor agreeing a vaccine is good, or the CDC saying vaguely positive things about a vaccine, and I think the same considerations apply.
Yes, I grasped the nature of the analogy the first time. I'm just pointing out the "trust network" agents in the real case aren't actually that similar to the agents in your analogy. VC firms *and* (so we don't get into this red herring again) several known to be adventurous investment firms grappling with shitty returns in the ordinary market (betting amounts they could easily afford to lose) plus the SEC do not add up to anything equivalent to "the watchdog CDC endorses use of this vaccine" or "80% of the world uses MS Windows as an OS and every online PC/laptop review magazine endorses products using it."
And I'm only making the point because the analogy seems part of a mildly defensive attempt to rationalize why trusting this particular trust network wasn't really a mistake. But certainly nobody here is hating on you for the mistake. Why feel defensive at all? If nothing else, you're clearly in good company. It would be much more interesting to analyze why you (and others) made the mistake in the first place. We all make mistakes like that, and your writing often shines most impressively when it tackles subjects of this nature: here's a mistake I made, and which people commonly make, and let's think deeply about why this happens.
"If you're using Windows, you're assuming *Microsoft* isn't stealing your credit card details."
I only assume that because I assume Microsoft would like to, but make such a mess of even simple updates to their software that they can't figure out how to get it to work.
See the roll-out of Windows 11 and their list of "if your PC/laptop doesn't have one of this set of processors, it isn't eligible for 11". It was transparently obvious they wanted to sell a ton of Surface laptops/tablets on the back of this (the links they kept sending me about 'buy a new machine to update!' were all for their Surfaces) and I'm not going to buy a new PC just to get Windows 11. Never mind that I dislike laptops and tablets and won't use one if I can possibly avoid it.
So ever since I've been using Windows, I updated to the latest version (yes, I suffered with Vista like everyone else) *except* for this one, and that's because I am not going to pay Microsoft more than I have to. 10 works fine and does what I want, why will I buy what I don't need?
(Ask me about their "Software as a Service" subscription model which we're signed up to at work, which I think is absurd for 'hey if you want to use Word, instead of a once-off purchase now you have to pay us for eternity'.)
All companies are evil and want to gouge money out of you. I don't trust *any* of them.
I think there's a big gap between this and what you actually said.
If someone told me that Microsoft was doing something nefarious, my reaction would *not* be "come back to me when you've started a company as big as Microsoft".
Hm, I get your point, but I think there is something missing. Suppose your kid is ill, and for whatever reasons many in your community strongly believe in alternative medicine, which has healed many who are around. And so you take your kid to the best healers around, all those persons with the highest alternative-medicine credentials, only to discover in the end that they couldn't heal your child. From your point of view, those were the experts. From somebody else's point of view, they were not to be trusted with your kid's health, because the whole system they rely on is fraudulent, or, for a better comparison, is at least imperfect with regard to certain illnesses. Maybe it does a good service with, let's say, preventing and reversing lifestyle illnesses early on, but it doesn't do a good service with healing cancer. In this example, the key is not to become a better alternative healer before being bitter about your mistake; it's to realize that this whole sub-sector never was the right place to seek the only or most relevant advice on your child's health, even though it gave your neighbours helpful advice on nutrition and everyday activities that resulted in them being much fitter.
The criticism from the non-profit/charity side, from those who would never have trusted FTX, doesn't come from a place of 'I'm a better financial expert than those at Blackrock' or 'why did you trust Blackrock over (their closest competitor being slightly more sceptical of FTX)'. At least in part it comes from a place of 'why did you think that specific sub-sector - which we know is prone to fall for hypes, which is into high risk, high profit, often sees big losses, and likes to play with money that doesn't represent concrete values - is the best place to rely on when thinking about funding for your charity?' I think in this case 'be a better billionaire or shut up' is misguided.
Just to be clear, my intention is not to argue against VC, billionaires or traders, I just disagree with 'as long as you can't do better, be silent'.
I guess your initial point was: don't beat yourself up too much over a wrong decision, and I actually agree with that one. But I think one needs a better reason for it, which maybe is just that everybody makes mistakes; that's human, and sometimes those mistakes have costly consequences. In this case, many others made a similar mistake. Making a mistake, even a costly one, doesn't mean you're a moral monster.
I'd probably also argue to seriously reflect on this mistake and lessons learned, rather than saying 'ah, all the experts were wrong as well, so whatever, let's continue as always'.
I think in this case you didn't make a medical-knowledge failure, you made a meta-level failure in determining which supposed-experts to trust.
Obviously figuring out which experts to trust is hard, and sometimes "go with the most prestigious ones" goes badly, but I think it's the best heuristic most people who aren't experts themselves have, and people who follow it mostly can't be faulted. Our hypothetical alternative medicine mom is blameworthy insofar as she rejected the consensus prestigious experts for non-consensus non-prestigious experts.
I think Blackrock, the SEC, etc, *are* the most prestigious consensus experts in the financial world. If it was a failure to trust them instead of someone else, I still don't know who that someone else would be. I don't think there was some other expert affirmatively saying "FTX is bad!" (and I don't think people who hate all crypto on principle but didn't identify any specific issues with FTX should count)
I know (online acquaintance) somebody with a PhD in molecular genetics, a real genuine physical syndrome, and who believes in Reiki healing that has improved their condition and is now offering to do Reiki healing over the phone/internet for people.
Do I 'trust' "well they've got a PhD in a biological science, I don't even have a basic degree in anything, they are clearly the expert here"?
It could be worse. Remember "I'm not a doctor, but I play one on TV...now here's some medical advice...?" That stuff actually works. We're amazingly gullible as a species. It's probably one reason we are also so tribal -- we need *some* defense from the risks of our individual gullibility, and joining one of Scott's "trust networks" can help with that.
>I don't think people who hate all crypto on principle but didn't identify any specific issues with FTX should count
I don't see why - obviously, that's fair that they didn't say "SBF is a fraud but CZ is fine" and didn't predict that FTX was going to fall and Binance be left standing. But also, they didn't claim to predict which specific crypto businesses would collapse in what order - they just claim that all of them will and that all of the money invested in crypto will be a dead loss because the entire business is fundamentally a Ponzi scheme. Given how many crypto businesses have failed, I'm updating strongly to "David Gerard is right".
This also explains why they aren't making money - creating a synthetic put option against the entire sector would be hard, and the margin would be so big that (as Keynes put it) the market can easily stay irrational longer than you can stay solvent. Moreover, the counterparties of that transaction are crypto investors; it's very hard to see how they would have the ability to pay out if you are right, since they are likely to go broke. What you would need is something like a credit default swap backed by a highly-diversified financial institution that has some exposure to crypto (if it has zero exposure, then it wouldn't be issuing those CDSes) but not so much that it would be at default risk if the sector went belly-up. But Goldman Sachs aren't issuing crypto CDSes.
[To simplify the above: I can't bet that "you're going to go broke" because if I win the bet, you can't pay out, so I have to bet with someone else that "he's going to go broke" and I have to be sure that you going broke won't bring them down too; the people I trust not to go broke if crypto crashes are not taking that bet]
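The counterparty problem described above can be sketched as a toy expected-value calculation. All the probabilities and amounts below are made-up illustrations, not market data; the point is only that a bet's value collapses when the counterparty's solvency is correlated with the outcome you are betting on:

```python
# Toy sketch of the "you can't collect from a broke counterparty" problem.
# All numbers are illustrative assumptions, not real odds or prices.

def expected_payoff(p_crash: float, payout: float, stake: float,
                    p_paid_given_crash: float) -> float:
    """Expected value of a bet that crypto crashes.

    If crypto crashes (probability p_crash) you are owed `payout`,
    but you only collect if the counterparty is still solvent
    (probability p_paid_given_crash). Otherwise you lose `stake`.
    """
    win = p_crash * p_paid_given_crash * payout
    loss = (1 - p_crash) * stake
    return win - loss

# Betting against a crypto-exposed counterparty: they likely can't
# pay in exactly the world where you win, so the bet is worthless
# even if your crash call is right.
print(expected_payoff(p_crash=0.8, payout=100.0, stake=50.0,
                      p_paid_given_crash=0.1))    # ≈ -2.0

# The same bet against a diversified, solvent institution (the
# Goldman-Sachs-style CDS issuer that doesn't exist for crypto):
print(expected_payoff(p_crash=0.8, payout=100.0, stake=50.0,
                      p_paid_given_crash=0.95))   # ≈ +66.0
```

The design point is that the only parameter separating a lucrative bet from a losing one is the conditional solvency of whoever is on the other side, which is exactly what the comment above says crypto skeptics couldn't find.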
It seems to me that there are two positions to update in favour of:
First, crypto is fundamentally a problem, anyone involved in it is not to be trusted. This is comparable to accepting charity funding from other sectors that are unethical, like casinos or tobacco or coal.
Second, any business that is making an especially big deal about its ethics and charity giving is more likely to be unethical/corrupt in its business practices. This is the old "mobsters give to the local church/school" idea writ large.
I think this is a reasonable heuristic in the big picture, but that it is being misapplied in this particular case. Big financial firms have a different purpose and live in a different ecosystem than big medical firms. The environments can't be compared to each other. Big financial firms are legendarily well known for not caring whether their counterparties are trustworthy, or honest, or are going to survive beyond the very short term. Their collaboration says exactly and only "this is going to make money for me". Any other consequences and participants can go sex act themselves. You only need to have read one Michael Lewis book to understand this. Big medical firms have their well-being and survivability tied to solving problems for the population at large, and big financial firms absolutely do not. Blackrock making money and cooperating with something is no basis for assuming that anyone else will make money, or not be scammed, or that their partners are not criminals. No one in that industry cares about any of that. That's where most of the margin comes from. The mafia is an expert in smuggling, but they are not a trustworthy source of information about the topic. The FSB is an expert in the Russian security state, but they are not trustworthy sources of information about it. Etc. If you've missed the existence of this entire category, that's not good.
Also, it's really aggravating that you keep bringing up the SEC - you have no basis for doing so. The link you cite makes the following argument: "The offshore crypto exchange to which US law does not apply was not prevented by the SEC via unicorn powers from committing fraud. With that as evidence, plus the evidence of anonymous people and of extremely trustworthy Republican politicians making accusations that they totally always have a good basis for, definitely maybe the SEC was in cahoots with FTX, after all they have held meetings, or maybe not, but definitely a 'bad look'."
It's an offensively poor and non-empirical argument, suggesting that you don't know much about the processes and practices of the SEC, the legal, regulatory, or habitual constraints under which they operate, or even the basics of what they have or have not done concerning FTX or the crypto industry - and, even worse, that you don't know what you don't know. But claims that "the SEC endorsed the bad guys" are a very convenient thing for a lot of people to claim/believe right now, for obvious reasons.
> I think in this case you didn't make a medical-knowledge failure, you made a meta-level failure in determining which supposed-experts to trust.
Yes, that's a fair summary of my point.
> I don't think there was some other expert affirmatively saying "FTX is bad!"
I'm not expert enough to give you a lot of detail here. But if I were to go to any financial expert close by and say 'I need money for altruistic cause X, and I want something that gives me more money rather than less money, but I also want something that is low risk, because (all kinds of reasons)', I kind of doubt FTX would have been the advice of the day.
I don't think anyone was arguing from first principles that FTX was the best company to get money from. I think FTX was offering people money, and you could either say yes and have FTX money, or say no and have zero money. I think the bar for accepting an offer like this is pretty low - basically just "not so fraudulent that it would be offensive to their victims to accept" - and that it was very fair to be surprised when in the end FTX failed to clear that bar.
I agree with most of that except the last sentence. It should have been very clear to anyone that FTX had a decent chance of failure. Especially after June.
You can say you don't like Mormonism, but if you say after a Mormon sex scandal "how could anyone have missed the red flags? It was so obvious!", then people have the right to not believe you. After all, if they were so obvious, why didn't any of the many critics of Mormonism see them? Why didn't the police see them?
Your Mistakes disclaimer, the opening statement: "I don't promise never to make mistakes." Grammar and sentence structure: as written, you never promise to make mistakes. Correction of the double negative should read: "I don't promise to never make mistakes." Trivial? Yes, but it totally changes the meaning.
This is cutting against the grain of this thread, but does anyone here still take COVID seriously, or are all of you basically over the pandemic? My brother and his girlfriend still mask up when going to certain places, and they also got the latest booster, and my corner drugstore and my parent's cafe still requires masking, but otherwise, I rarely see people masked up. Am I right in assuming it's pointless to care about COVID still?
I have resumed wearing a mask on public transport for the winter season. Not really because I am concerned about Covid specifically, but rather because I think it is a good general hygiene standard.
And when I get cold symptoms I test for Covid. I plan to isolate a little more strictly with Covid than with other colds, but not much more. (Isolating is no longer mandatory in my country even with Covid.)
I pretty much stopped caring about COVID back in February, but I wore an n95 when flying at my parent's insistence and I may decide to get the latest booster just in case (mainly to protect relatives when traveling for the holidays, rather than myself).
I have a risk factor that makes my risk approximately equivalent to that of a man in his 70's. I go where I like, but mask in indoor public places, get the boosters, and run a big air purifier in my office. The degree of risk of my getting very sick with a respiratory illness is going to wax and wane as flu, covid and RSV cases do, but it's not low enough to shrug off. If I were 35 with no risk factors I think it would be. The other concern I have is Long Covid. I'm sure a lot of things called Long Covid are just slower-than-average recoveries, symptoms that in fact have nothing to do with covid, neurosis, malingering etc. But I'm pretty positive that does not account for all the Long Covid cases. I had what I'm pretty sure was a post-viral syndrome myself 20 years ago and it absolutely ruined 3 years of my life. I never want to go through that again. Just for the record, I am not indignant that most other people are not masking.
Mostly my behavior has returned to pre-pandemic norms, and my comfort level is squarely there.
The only exceptions to that would be one-offs like if I was exposed, I'd stay in and wear a mask to go out until I could reliably test negative, or if I have a friend who is uncomfortable for some reason I'd mask, hang out outside, or whatever.
I got vaxxed, and got a booster last December. Since things went back to normal in Ireland about 6 months ago, I have too. Pharmacies used to ask you to mask, but now even they don't. I was drinking at the bar in my local pub tonight, and have been frequently.
I'm not convinced by the booster thing at this point - I think if you are working in public health or something maybe it is useful. But it doesn't prevent infection, and other than reducing the chances of infection for a couple of months, all it does is give your long-term immunity a prod IMO. Encountering the virus in the wild probably does that just as well. I don't think it would hurt to get another booster this Christmas, and if my family fret about it I probably will. But I am basically treating Covid as just another endemic virus.
As far as I know I haven't got it, but I quite likely have had it asymptomatically. Then again my sister had it a few months ago for the first time (her son who lives with her got it previously but she wasn't infected). She said it was nasty, but she was over it within a week.
I mask if I'm sick (and probably contagious), because that is one of the few cultural changes I'm hoping lasts after the pandemic. Otherwise I don't worry about COVID beyond what I have to with regards to my family's jobs (my mother works in healthcare, so I get tested before she comes to visit if I might be sick)
I'm taking it seriously in the sense of getting new boosters when they come out, but at this point I think that's about all it is reasonable to do if you're not in a very at-risk population.
I'm still avoiding gatherings as much as practical, wearing an n95 when it's not, not eating indoors, etc. I expect to do that until:
a) I'm convinced that long Covid odds are well under 1% per infection, ideally under 0.01%. (Currently the studies, which are all flawed in various ways, more research needed, etc. etc., seem to point to closer to 1 in 6, with 1 in 20 about as low as it gets. While vaccination helps, it seems to be more on the order of a 40-50% reduction - maybe less, maybe a bit more, not vastly more - rather than my preferred .999...)
I'll be very glad to see a persuasive study that shows it to be much lower, and due to the low quality of the existing data that's not ruled out. But thus far I'm still waiting.
Advances in prevention and treatment would also work, of course. But we're no longer doing Warp Speed-type programs and even funding for existing research has been largely blocked by budget struggles. (The administration is going to try again in the lame duck, but I imagine that it will fail again.)
So I expect that to be back to the usual timeline for new drugs, i.e. not getting to approval any time soon, even if there's something to approve. (And I've read Derek Lowe for enough years to know how many candidates there are for each drug that works.)
b) prevalence is low enough that it's reasonable not to expect to be infected once or twice a year in the absence of precautions. I don't go out of my way this much to avoid flu because 1) it doesn't seem to have anywhere near the same rate of sequelae, and 2) flu prevalence times infectiousness meant I could go 5-10 years without getting the flu. That's clearly not currently the case with Covid.
c) the expected seriousness of long Covid is assessed to be lower than it currently looks. (I'm concerned both about life-changing but not immediately deadly things like long term fatigue or permanent anosmia, and actual life-threatening issues like greatly increased cardiovascular risks in the years following.)
d) social/economic pressure makes that unsustainable.
(Or, I guess, e) I actually decide to stop worrying and love repeated SARS-Cov-2 infections, but that seems less likely.)
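For what it's worth, the risk arithmetic behind point (a) above can be made explicit. The per-infection estimates (1 in 6 down to 1 in 20) and the 40-50% vaccine reduction are the ranges the comment cites, treated here purely as assumptions:

```python
# Sketch of the long Covid arithmetic in point (a). The input figures
# are the commenter's cited study ranges, assumed here for illustration,
# not established epidemiological values.

def residual_risk(per_infection_risk: float, vaccine_reduction: float) -> float:
    """Per-infection long Covid risk after a relative reduction from vaccination."""
    return per_infection_risk * (1 - vaccine_reduction)

# Pessimistic study estimate (~1 in 6) with a 50% vaccine reduction:
print(residual_risk(1 / 6, 0.50))   # ≈ 0.083, i.e. roughly 1 in 12
# Optimistic estimate (~1 in 20) with a 40% reduction:
print(residual_risk(1 / 20, 0.40))  # 0.03, i.e. 1 in ~33
```

Under these assumptions even the most optimistic combination (3%) sits well above the 1% threshold in point (a), which is why the conditions read as unmet.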
Even if I go "back to normal" in some sense, I still don't expect I'll ever, e.g., fly without a mask again. I routinely got colds and worse from flying and I'd be just as happy not to go back to that even if Covid risk drops below my threshold.
I got the bivalent booster some weeks back. I am considering masking up in the most crowded environments I frequent, namely the subway and the supermarket.
n95 masks help substantially (not as much as I'd like, significantly more than zero), and I don't find them all that uncomfortable. YMMV.
For those looking for comfortable ones, I like Kimberly-Clark's duck masks. 3M's Aura models also get good ratings for both comfort and filtering capacity.
Don't order from Amazon. They're full of counterfeits. (Or were last time I was buying from them.) This seems to be a general problem with filtration products for them-- I stopped buying water filters there a few years back after I noticed that I was getting fakes (that weighed half as much as the genuine article).
One thing that perplexes me is the continued prevalence of cloth and surgical masks. In 2020, sure: there was a shortage of n95s, and something was better than nothing (more true pre-omicron). And a lot of people were wearing the minimum they could get away with to comply with mask requirements anyway.
Now no one is required to wear a mask if they don't choose to (at least most places-- I think my health care system still requires them), and there's no longer a shortage. So I'd expect things to have sorted into no mask ("No mask? No mask!") and n95/KN95/etc. Instead, I still see, among the minority of mask-wearers, a fair number of cloth masks that are probably very porous and minimally protective, and surgical masks that leak out the sides.
(And that I at least found much less comfortable than an n95. Surgical masks were sweatier, and I never found one that didn't fog my glasses. Having the seal and space to breathe an n95 offers is a huge improvement!)
Maybe it's cost. But it doesn't really seem to correlate with income. (And it's possible to make n95s last if one is motivated-- buy seven and rotate them daily till they're visibly dirty or damaged.) And KN95s especially are pretty cheap, though I was never able to get a good seal with one.
"The King In Yellow" was definitely a play about covid. That's my headcanon now. Hastur is pleased.
Working in grocery, I've also remained baffled by the prevalence of barrel-bottom masks, often in addition to nose-out wearing. Have always wanted to see what's in someone's head when they run that kind of heuristic. What sort of strange information did they get about covid? Is it just a "better than nothing" rationalization? A virtue signal? (The strangest is still occasionally seeing people trying to make do with, like, bandannas or pulling their shirt collar up...) Sometimes wonder if it's just a collective-action problem, and if there was a loud enough PSA that "Actually Most People Won't Judge You Weird For Not Masking", perhaps a significant fraction would sigh in social relief. That's the only reason I still do it, on occasion: not wanting to ruffle any tribal feathers. Beware Trivial Social Inconveniences.
(Conversely, some people enjoy the social acceptability of masks for convenience - they don't like having facial expressions read, do like the reduced makeup requirements, whatever. I'm sympathetic to such people. Still seems like that wouldn't account for a huge fraction of maskers though.)
Yes, agreed! I don't understand this phenomenon, and I very, very much don't understand how we *still* have people wearing masks with their nose sticking out in places where there's no requirement to mask at all.
I can only conclude it's just a fundamental lack of information, except I also don't understand that, because if you think the risk is high enough it's worth ameliorating, how do you then think "but it's not worth researching for five seconds to find out how to effectively do that"?
Oh come on! LHN didn't say or imply anything remotely like they're delighted to do it. Nor did they say they planned to do it the rest of their life. And you & I both know that if LHN is masking it's because they are concerned that covid for them would be something worse than sniffles. It's a lowdown, unfair tactic to misstate what LHN said while sneering at the absurdity of the distorted version you present. It's as though I responded to your post this way: "Trebuchet's take is that the world has a right to see their glamorous face 24-7, and that their courage and realism about covid are as rare & impressive as a perfect LSAT. Lord save me from these egomaniacs!"
Covid isn't the sniffles, as the 2300 Americans dead of it this past week could attest. But at this point no one's making you protect yourself or anyone else against it.
As for understanding: Maybe we assess the risk of Covid differently. Maybe we assess the burden of masking differently. I'd guess probably both.
The recommendations were for those who are still interested in comfortable, effective masks at least some of the time. Which by my observation is a minority, but not vanishingly small fraction of the population.
Hm, it's arguably worth getting an omicron booster, especially if you're in a vulnerable population, but aside from that yeah, pretty much (unless you're severely immunocompromised, but then you'd have to take extreme measures to avoid flu anyway).
It's a huge exaggeration to say there's nothing you can do to avoid it without spending your entire existence in self-imposed lockdown. I go to stores, movies, etc., but mask in these indoor public places. I go to work, but run a big air purifier in the office in lieu of wearing a mask and asking the people I meet with to mask. Before parties and the like my friends and I test. That is very far from self-imposed lockdown, and so far it has worked to keep me from getting covid. Of course I am aware that I may still get it, but the point of my precautions isn't to guarantee that I never get it - it's to minimize the number of times I get it. Zero is my preferred number, but if my precautions mean I get it once, rather than 3 or 4 times, that also seems like a worthwhile goal, worth the trouble I'm taking (which in total is maybe 2 hrs per week of wearing a mask, testing once a month or so, and flicking the switch on the air purifier when I arrive at work).
>All of the stuff we did besides the vax was a giant waste of time and money
You're ignoring the period of time early in the pandemic when the medical systems were being genuinely overwhelmed, there weren't enough ventilators or even beds to go around, and "flatten the curve" was the (even in hindsight, correct) Narrative. Merely slowing transmission made sense as one of the terminal goals.
(There's some debate as to whether the US could have done better with a faster/stronger response, or whether it was naive to expect that to ever work when partisans were creating a low-trust cooperate-defect scenario, but the above isn't predicated on that)
Everything after the booster, though - it's much less clear that it wasn't a waste of money.
The country got caught up in the dream that we could eliminate covid, the way we have various other diseases. At the beginning, I bought the idea that that was theoretically possible, though I didn't see how we could do it in practice - locking everything down for a few months seemed like it would do terrible damage to the economy and to a lot of individuals. I now understand that covid just isn't the kind of disease you can eradicate the way you do polio, and I'm sure docs and epidemiologists at the alphabet agencies realized that from the beginning. Why didn't somebody in government say so? Why didn't somebody come up with a plan that optimized our chance of having the best outcome with this damn disease, given that eradication was not possible?
>I'm sure docs & epidemiologists at the alphabet agencies realized that from the beginning.
Other variants of SARS and MERS were successfully contained. I doubt it was immediately clear just how much more difficult SARS-CoV-2 was in that regard.
" Why didn't somebody come up with a plan that optimized our chance of having the best outcome with this damn disease, given that eradication was not possible?"
They probably had one, but it failed on the critical item: the populace must go along with it.
Sweden was one of the few countries where the population didn't successfully protest the state epidemiologist's original plan (cue politicians jumping in, elsewhere).
In retrospect, should it have been a red flag that FTX didn't buy a billion malaria nets and distribute them in Africa?
EDIT: Aka, should it have been a red flag that an entity claiming to be Effectively Altruist was only doing high-status effective altruist activities, not low-status but effectively effective altruist activities?
Maybe, but as Scott points out, FTX was acting like it was idea-constrained, not funding-constrained, and it still put $0 into bednets. Maybe "AI safety" is so much better than bednets that you fund all AI safety ideas first before buying a single bednet, but FTX was spraying money everywhere.
Out of sympathy, here is the very short, very vaguely accurate Cliffs Notes summary:
FTX was a very successful cryptocurrency exchange run by a man who was very connected to Effective Altruism. As an exchange, it was supposed to hold people's cryptocurrency and make money off of fees when exchanging from one cryptocurrency to another. It turns out that instead of holding on to people's money, they were using that money to speculate on cryptocurrency. Also there was a lot of financially shady stuff going on that is too complicated to summarize, but most financial people agree was a deceptive attempt to make a particular cryptocurrency that FTX controls look better than it was. Another successful cryptocurrency exchange in competition with FTX realized they were puffing up their cryptocurrency, and did some maneuvers to cause the price to drop really far really fast. As part of this many people wanted to withdraw their money from FTX, and then FTX stopped letting people withdraw their money: this revealed that they didn't actually have the deposits because they had been spending them on speculative investments, which they were not given permission to do. FTX is collapsing as a company, and a bunch of people lost the cryptocurrency they had deposited with them.
This has led a lot of Effective Altruism people to say publicly "Don't steal people's money in an attempt to make even more money: it's wrong, even if you planned to use the money you made to make the world a better place."
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America. You, a random AI researcher who tried Googling “who are these people and why are they giving me money” before accepting a $5,000 FTX grant, don’t need to feel guilty for not singlehandedly blowing the lid off this conspiracy. This is true even if a bunch of pundits who fawned over FTX on its way up have pivoted to posting screenshots of every sketchy thing they ever did and saying “Look at all the red flags!”"
Scott,
I very rarely comment here, but I follow you voluntarily. I don't think you're a bad guy, I've learned some interesting things from you. But this reply is really a joke, I'm sorry. I'm a random well-educated liberal, and it's been wildly obvious to me that FTX was a Ponzi scheme for years, and more importantly, not just me, but a thriving crypto-skeptic community.
You'll see right in the biography that this guy's been covered by the MSM for years. Ever since Mt. Gox blew up, there has been a super-abundance of critical analysis of crypto as a giant scam.
Does this sound like a trustworthy basis for assigning financial value? No, it does not. This is CNBC.com - I am not deep diving here.
I am neutral on your point as to whether NGOs should feel *bad* about taking money from a criminal. They were presumably using the money to do good, and it's easy to get confused and not know if and when the scammer was crossing the line from unethical lying and cheating to criminal behavior. That's an individual ethical decision. But I guarantee you that the Democratic party, ProPublica, and the SEC were extremely aware that SBF was an untrustworthy scammer, although they may not have all known he was crossing into criminal behavior.
The details of how SBF appears to have committed fraud were not obvious and well known, but crypto was readily knowable as a Ponzi scheme that was consistently bringing ruin to naive people. You absolutely could have done better due diligence to understand that, and so could any NGO who wanted to understand with an hour of research. Of course, it's easy to do research badly and not realize that you have done a bad job, so I'm not personally scorning anyone who was rugged, but those people absolutely should hold themselves accountable for mucking up something not overly difficult.
You don't need a prediction market, you just need a reasonably diverse base for information intake and a willingness to take adverse information seriously. Crypto exchanges have been blowing up on the regular for a decade, all the info was there in plain sight.
Scott is guilty of being human. This is a friend-of-a-friend kind of thing, if people he knew were saying that people they knew said these guys were okay, why would he doubt them?
It's also the technocratic strain in Rationalism and EA that believes anything done with modern advanced high-tech methods has to be so much more efficient and better. I'm suspicious of technocracy, so crypto always sounded to me like a very elaborate way to get scammed, especially when some people were enthusing about how it was untrackable and you could safely buy your guns/drugs/illegal-but-shouldn't-be stuff with it.
However, it's always very hard for people to believe that others who share (or seem to share) the same general beliefs as they do, and move in the same circles, and are involved in the same good causes, can be up to no good. This may not be the strawman 100% rationalist who takes nothing on trust and always runs tests on if they should trust their spouse when their spouse says they love them, but it's a lot more human and a lot more personable.
You are overstating your case (just like many commenters before you); crypto is not ENTIRELY a scam.
BUT, more importantly, it is an obviously sketchy industry, just like, say, personal development advice ("this book will change your life!") or medical supplements or something like that.
Any "crypto-billionaire" should automatically be viewed with suspicion unless proven otherwise (and in my eyes, the number of crypto moguls who have proven themselves beyond suspicion is so far exactly 0). Failure to see that does indeed seem like a significant error of judgment.
>I'm a random well-educated liberal, and it's been wildly obvious to me that FTX was a Ponzi scheme for years, and more importantly, not just me, but a thriving crypto-skeptic community.
Yes, but there are probably 5 other things you think are obviously Ponzi schemes (or otherwise criminal/corrupt) that aren't.
Of course for any controversial claim, there will be people who believe it and people who dispute it. When facts come out that prove one side right, everyone on that side will get to crow about how 'obvious' it was the whole time and get to feel superior to everyone on the other side.
That doesn't actually mean it was obvious and that the other side didn't have a coherent rational story for their beliefs, or even that the 'winning' side necessarily evaluates evidence in this domain better in general. You need an N of more than 1 to prove that.
Also, more specifically: I'm 100% with you on the side of believing that crypto and web 3 has been a speculation bubble from that start, motivated by grifters and confidence agents. But that doesn't necessarily mean that *every individual actor* on the scene is intentionally committing fraud and lying about their intentions at all times. You still have to make judgement calls in individual cases, and can be wrong for negative judgements.
I agree with you. It's hard to strike a balance in the reply and in how to talk about these things. You're right, I don't have any evidence that some notional "side" that is "crypto skeptics" is generally better at evaluating evidence about scammers than a notional side that is "crypto friendly". I also agree that not everyone who believes in crypto, or even believes in and markets it, is a "scammer" in terms of committing securities fraud, etc. Speculation bubbles are weird. You tell people "buy this coin, and then its value will go up and you'll be rich", and that.... is true at the time! And will be true for an unspecified future amount of time. Everyone is being completely honest and accurate... until they stop being accurate later, and the "dishonest" part often comes in from secondary lies and deception about the nature of the market, the marketplace, and the financial games happening offstage. Which not everyone is aware of, or fully understands when they are told about.
I don't really think that "crypto skeptics" are even a side, or a coherent community, etc, although they probably overlap with market skeptics and small-c conservatism that may have some overlap with "conventional NPR liberal"... whatever that trope really represents about its own population base, etc.
Anyway, to me personally, I wrote the reply because Scott's reply suggested or implied, to me, that everyone who fell for this had no reason to take stock, no reason to doubt themselves or their process for evaluating trustworthiness, and there was no easy way to know about any red flags. I don't agree. If you got rugged by this, in this day and age, you do have a reason to doubt yourself and change your methods going forward. Mistakes are part of life, but learn from them to avoid repeating them. Most of all, I want to push back on the last part. The red flags were widely reported and readily available to be known.
There's a very important difference between "Crypto doesn't have real value/potential, and FTX is a company profiting off of people playing casino-like games, which I think is bad" and "FTX is committing actual fraud and/or is insolvent but hiding it"
Even believing crypto is basically a scam, it was plausible that FTX was no morally worse than a regular casino, which in general are (I assume?) non-fraudulent businesses taking advantage of people throwing their money away.
(caveat: I don't actually believe crypto is a scam, I just think even if crypto is a scam, FTX was plausibly non-fraudulent)
This is a valuable distinction, and other replies also draw a distinction between "crypto is a speculative bubble that will pop and selling it is shady" and "committing financial crimes". This is true... I guess. Not every crypto-related business is committing financial crimes, and not all of them are even lying to and/or concealing material information from their customers, which is also not always a crime, although it is always shit behavior.
To know that FTX was committing crimes, you needed to be paying attention to more specific info. However, that info was also out there. The Bitfinex'd link is a gateway to lots of that info. How many offshore crypto exchanges are committing massive financial crimes? all of them. Every single one. No doubt whatsoever. And there's plenty of evidence. But it is true that this is a complicated and messy topic and that nonexperts can interpret evidence and facts in multiple ways and also get bored and tired and distracted and not know what is a crime and what is true.
So I don't think everyone who didn't realize FTX and every other offshore crypto exchange platform are criminals is a bad person or an idiot for that. But if it is your job to figure these things out and you didn't, you could and should have done so. It wasn't a case where the needed info wasn't available; it was.
Just as a matter of diplomacy, you're probably better off not relying on David Gerard for a big chunk of the support for your argument. At least not here.
Care to elaborate? I have no idea of the backstory here. Anyway, he's by no means and in no way unique or critical to the argument, I just can't be bothered to hunt down other sources for the community.
I don't think it undermines Gerard's position on crypto, specifically (it's probably made him a _little_ more opposed, but only in the tribalistic sense that red/blue affiliation tweaks everyone), but it's an issue in other spaces, and I say that as someone who interacted with him better on tumblr.
Blockchain technology has the possibility to change the world for the better but we have yet to get it truly woven into the fabric of our society and the regulating powers that be may ruin it because it makes so many of their institutions obsolete. Right now it's like the internet in 1996. No idea how to invest other than in the broad idea that it will move forward. Cryptocurrency, on the other hand, doesn't have a super compelling use case for developed economies other than being like a wildly speculative commodity.
My favored examples of real world blockchain utility mostly come down to enforcing government transparency. As an example, Nigerian land speculators have found that, instead of buying land from existing owners, it's often much cheaper to bribe a land registrar to surreptitiously alter the title [1].
Consider a land registry on a public blockchain where records can only be updated with two of the following three cryptographic signatures:
A) The existing land owner as stated on the blockchain
B) A land registrar official
C) A state-level judge
Such a system doesn't make it *impossible* for corrupt officials to illegally alter records, but it makes things much more challenging by enforcing transaction transparency on a record system that the government doesn't directly own or control.
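As a toy illustration of that 2-of-3 rule (all names and keys invented; a real on-chain registry would verify public-key signatures such as Ed25519, not shared-secret HMACs, which are used here only to keep the sketch standard-library-only):

```python
import hmac
import hashlib

# Hypothetical signing keys for the three parties in the example.
KEYS = {
    "owner": b"owner-secret",
    "registrar": b"registrar-secret",
    "judge": b"judge-secret",
}

def sign(party: str, record: bytes) -> bytes:
    """Stand-in 'signature': HMAC-SHA256 over the proposed record update."""
    return hmac.new(KEYS[party], record, hashlib.sha256).digest()

def update_allowed(record: bytes, signatures: dict) -> bool:
    """Accept a record update only if at least 2 of the 3 parties signed it."""
    valid = sum(
        1 for party, sig in signatures.items()
        if party in KEYS and hmac.compare_digest(sign(party, record), sig)
    )
    return valid >= 2

record = b"parcel 42 -> transferred to B"
# Owner + registrar: allowed.
print(update_allowed(record, {"owner": sign("owner", record),
                              "registrar": sign("registrar", record)}))
# A lone bribed registrar: rejected.
print(update_allowed(record, {"registrar": sign("registrar", record)}))
```

The point of the threshold is visible in the second call: no single corrupted official can push a change through, and every attempted change is a public transaction either way.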
Alternately, imagine a system where govt contractors are paid in cryptographic tokens that can only be cashed out for untrackable dollars if and when they're paid as salary into individual worker-owned accounts. When paid from one contractor to another, or shifted between expensing units within a contractor, those transactions live on a public blockchain. If you want to figure out, for example, where exactly the money went for NYC's 2nd Ave subway, it's a database query rather than waiting for the NYT to spend several hundred hours doing investigative reporting [2].
In neither of these examples are you necessarily *replacing* an institution, but rather substantially *reforming* the behavior of an institution by making malfeasance harder to hide.
The primary case at present for blockchain fully replacing an existing institution is, basically, when you want to coordinate crime. Specifically, something like bitcoin is useful as a way of illegally circumventing capital controls and currency pegs in failing states with a hyperinflationary currency.
There are speculative notions about how blockchain could enable things like opt-in transnational states [3], or be used for public goods financing [4], or perform some kind of secure online voting system [5], but none of that's here yet and I don't understand any of this stuff well enough to confidently opine about viability.
In neither of the first two examples is blockchain necessary to achieve the benefits -- if a government is willing to designate a blockchain as its "source of truth" for land ownership it could certainly do the same with a third-party database hosted and run by a neutral party outside their jurisdiction. (For the record, in neither case do I believe it's realistic that a government would actually do this).
And requiring all government contractors to be paid in internal accounting dollars that could only be transferred internally unless paid out to worker-owned accounts might work _better_ run through a centralized Federal database, as there could be a strong validation process to ensure that the worker-owned accounts were actually worker-owned, and that submitted expenses, etc. were valid, contractor organizations had actual existence as corporate entities with Federal tax numbers, etc.
The fundamental argument for blockchain solutions for these types of problems is that they remove the possibility for modification of the source of truth or the transactional history and therefore remove manipulation of the database as a source of corruption, but they are by no means corruption-free (51% attacks can enable violation of the integrity of the chain, bugs can enable all sorts of malfeasance and hacks, and even barring the above, just because a transaction is valid doesn't mean the input data was correct or the participants are actually who they say they are).
I think your comment about whether it's realistic for the government to "actually do this" is important in the discussion. You are correct that the blockchain is not technically necessary, but if the alternative solution is wildly implausible, isn't there value in the blockchain solution over the current system especially in places with significantly more "old school" corruption? Assuming the tech was there to implement this fairly easily and you could really improve transparency--wouldn't millions of powerful local officials feel threatened and work to prevent adoption?
I actually agree with your criticisms and should clarify that I don't presently endorse any of those use cases as, necessarily, a good idea. My intent was to scope out the best presently plausible use cases for blockchain, not to present blockchain as the ideal solution to the problems under consideration.
There are institutions, they are just code instead of people. You can read the code and opt in. Anyone can make new code at any time and people can move to it freely.
Anyone can fix any problem by writing new code and moving to it.
The blockchain part simplifies the distribution and running of code and establishes truth (immutability) and identity (private keys).
I'm handwaving a whole lot here, but maybe you get the idea?
I think the specific case where a blockchain is useful is where there's an official mechanism for making a change, but in practice a specific guy actually enters the change, and that specific guy can be bribed to just make the change without going through the mechanism.
Like, the law is officially supposed to be changed by a public vote in an elected assembly, but in practice, the clerk can just change things and you can bribe the clerk and not bother with the vote. In that sort of low-trust situation, you could put the law on the blockchain and set up the DAO so that the only way to change it is for a majority of members of the assembly to directly input their approval of the change - ie the vote takes place on the blockchain. That cuts the clerk out of the loop (obviously, you can still bribe the legislators, but that tends to be more expensive).
I think these sorts of problems are relatively unusual - ie problems where the official records are changed by bribing the records-keepers.
Also, I think that voting on a DAO is sufficiently technically difficult that I wouldn't trust elected legislators to do it correctly; and they would get a staffer to do it, and now you're back to the original problem: you can just bribe the staffers.
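The "cut the clerk out of the loop" idea above amounts to a simple threshold check; here is a minimal sketch (identities invented; a real DAO would verify cryptographic signatures rather than bare names):

```python
# Hypothetical registered legislator identities for a 5-member assembly.
LEGISLATORS = {"A", "B", "C", "D", "E"}

def change_accepted(approvals: set) -> bool:
    """A change to the on-chain record passes only with a strict majority
    of registered members; signatures from anyone else are ignored."""
    valid = approvals & LEGISLATORS
    return len(valid) > len(LEGISLATORS) / 2

print(change_accepted({"A", "B", "C"}))   # 3 of 5 members: accepted
print(change_accepted({"A", "clerk"}))    # one member plus a clerk: rejected
```

The clerk's approval simply doesn't count, which is the whole trick: the only way to change the record is the official mechanism itself.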
If EA were a single indivisible idea that includes the FTX affair, that would be pretty bad.
But luckily, it is a series of distinct assumptions. One can be skeptical, for example, of the idea that one should prioritize a high expected value, even if the modal outcome is neutral or negative, but that would be no reason to doubt much more basic EA assumptions, like "not all dollars of charity have equivalent impacts." Or "poorer people generally benefit more from charity than richer people, and by global standards, very few of the poorest people live in Western countries."
Unfortunately for EA, those assumptions are not unique to EA thinking, and EAs now have a lot more baggage.
EA has a few decent insights, some of which are not unique to EA, though EA is making positive inroads toward getting them more widely considered. Also unfortunately for EA, some of the other hills they seem inclined to die on involve perspectives with limited value for most other people: AI risk, animal welfare, "weird" Rationalist trappings like polyamory. These were already troubling for a group that was just becoming well known. Now that FTX is getting linked to EA, they are going to have a much harder time getting positive messages out.
Hence including this in the topic of "updating". My prior on "big political donor is involved in money laundering" is high to start, of course. When the donor in question is a finance guy I update higher. When he appears to be a force for good in the world like SBF I revise downwards. When he's caught doing actual fraud, I revise upwards again.
Admittedly, my original prediction of technically-legal finance shenanigans may be much higher than most readers here, but I don't -feel- cynical. It's what allows me to laugh off critics of Bernie Sanders saying he has three houses and a couple of supercars. Well yeah, but I'm sure he got all his assets in ways that are technically legal. He's in Congress, what do you expect?
(Huey Long, when asked how his personal wealth had grown 10x more during his time in office than his gross salary, famously replied, "Only by exercising the most exTREME frugality.")
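The sequence of revisions described a couple of comments up is just repeated Bayes steps; here is a sketch with entirely invented numbers, only to show the mechanics:

```python
def update(prior: float, p_evidence_if_fraud: float, p_evidence_if_honest: float) -> float:
    """One Bayes step: posterior P(fraud | evidence)."""
    num = prior * p_evidence_if_fraud
    return num / (num + (1 - prior) * p_evidence_if_honest)

# All probabilities below are made up for illustration.
p = 0.10                     # base rate: big political donor is laundering money
p = update(p, 0.9, 0.5)      # he's a finance guy: weak evidence up
p = update(p, 0.3, 0.7)      # appears to be a force for good: evidence down
p = update(p, 0.95, 0.05)    # caught doing actual fraud: strong evidence up
print(round(p, 2))           # ends well above the 0.10 base rate
```

Each likelihood ratio pulls the estimate in one direction, which is the up-down-up pattern the comment describes in words.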
> The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”
You know, I asked myself some similar questions after being cheated on by a spouse and friend. In the end, I decided the act of trusting itself has intrinsic value. It's not infinite, so you have to take some care, but if you trust 100 times and get burnt once, and that once isn't the end of the world, maybe you came out of it OK?
Also, telling everyone proactively how, when you were fooled, it made you feel bad and take more care in the future means placing trust in you is a better bargain than placing trust in any other random person.
"...but I am just never convinced by these calibration analyses."
I'd be more convinced if the *polls* seemed to take this into account and get better over time.
One thing that I think hurts a lot of this is that folks really want to assume independence because it makes the math so much easier. The underlying reality often isn't independent.
I have a super-short writeup about this and how I think it helps to explain the 2016 election errors by the pollsters.
Correlations are, indeed, real. But they usually revolve around a "hidden variable". Still, one should realize that that hidden variable probably exists, even if one can't identify it. (But sometimes it really *is* just random variation, and won't persist.)
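A toy simulation of the independence point (all numbers invented): if every poll shares one hidden bias, averaging more polls stops helping, because the shared error never averages out.

```python
import random
import statistics

random.seed(0)
TRUE_MARGIN = 0.02   # hypothetical true margin: +2 points
N_POLLS = 10

def simulate(shared_bias_sd: float, independent_sd: float, trials: int = 20000) -> float:
    """Std dev of the error of a 10-poll average.
    Each poll = truth + a bias shared by all polls + its own noise."""
    errors = []
    for _ in range(trials):
        bias = random.gauss(0, shared_bias_sd)   # the 'hidden variable'
        polls = [TRUE_MARGIN + bias + random.gauss(0, independent_sd)
                 for _ in range(N_POLLS)]
        errors.append(statistics.mean(polls) - TRUE_MARGIN)
    return statistics.stdev(errors)

# Fully independent errors: averaging 10 polls shrinks the error ~sqrt(10)x.
print(round(simulate(0.0, 0.03), 3))
# Add a shared 2-point bias: averaging barely helps; an error floor remains.
print(round(simulate(0.02, 0.03), 3))
```

This is roughly the 2016 story: the polls' independent noise did average out, but a correlated miss (the hidden variable) hit every poll in the same direction.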
I used FTX and left some money there way longer than I should have because of EA/SSC/Rats implicit and explicit vouching for SBF. Obviously I don't blame anyone but myself, it wasn't a big portion of my portfolio (sadly I can't login to check specifics) and I'd still trust the related communities more than most but it still seems like a big collective L.
Why are you demanding proof from me? I didn’t endorse it as proven! But it is an angle that was not mentioned by Scott and I thought it should be included as a possibility since he is trying so hard to figure out what to think about this.
Someone else posted this link here but maybe you didn’t see it. I recommend it because even though it isn’t directly relevant to the Ukraine theory, it argues very strongly that SBF was put up to create FTX by much older and more experienced figures and that FTX never made any sense as a legitimate business.
> make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively be emotionally dead to anybody who goes above a certain threshold.
Do you think you should add to that list a certain blogger who advocates for a monarchy in a time when the more religious party coalesced around a figure who sent a mob to disrupt the peaceful transition of power, given that your association with him has pulled his audience into yours and given your halls a well deserved reputation for racism and fascism among those who have had the good sense to be driven away from that stink? Or are you still being charitable to bad ideas from a dude whose qualifications consist of having a blog with smug essays on it? (I suspect you're about to learn that headlines are short-lived, and those trotted out as stars for a few years can be abandoned and ignored in a mere turning of the times. If you want to stay shining, and I'd like that personally, you're going to have to think about your mistakes. Thiel won't even look at Moldbug if it doesn't suit his purposes anymore.)
The problem with the SFBA Rationalist cult is very specifically that their anti-credentialism led them to discount the importance of any establishment knowledge and utter crankery is the result.
Someone else called it: hubris. It's hubris that causes EA/Rationalist types to attempt to solve the same problems as everyone else believing their magic online blog juice will prevent them from making the same mistakes as everyone else, so they make not just the same mistakes but the same mistakes from a past era.
Credulity isn't a virtue if an entire community forms around someone who sorts out the most credulous and willing to believe the narrative of genius even secondhand...
Only in a very bounded sense — but still a meaningful one: Scott does believe there's positive value in reading him. Some would argue that Moldbug is the type of insidious proselytizer who is not *safe* to read even if you go in telling yourself "I disagree with him on core moral points and always will, I'm just curious to see what his object-level arguments are"; who will pollute your thinking with memes — in the Dawkinsian sense — that will make your thinking trend more right-wing over time without your conscious awareness.
Personally I want to think better of Scott's skill as a rationalist than that, i.e. that he wouldn't fall for such a "honeytrap"; but a more paranoid/cynical person could very, very easily make the case that this is what happened to him. That even as he tried to reject the overall worldview, he allowed disjointed Moldbuggian ideas and assumptions to creep into his thinking little by little, Cathedrals and left-swimming Cthulhus and the lot; and that a critical mass of those has brought him much closer to being reactionary-adjacent than 2012-Scott would have believed reading Moldbug could or would make him.
I am skeptical that "dangerous meme" / infohazard is a useful concept. Like "misinformation", people are only going to apply that idea to their outgroups as a way to discourage testing and exchanging ideas, to police social boundaries. (As we see here with Impassionata's excellent impression of a NYT commenter.)
Having said that, even if "trusting Moldbug not to publish infohazard" is an eccentric interpretation of the OP's usage, it certainly qualifies as intelligible.
I struggle to believe that you wrote this, or could reread it now, with the intent to change our minds or persuade anyone here to your perspective. Maybe I'm wrong, maybe you did genuinely think that this was persuasive, but that's pretty hard to believe.
If you wrote this to express your hatred and disgust, and more broadly the hatred and disgust certain factions on the left have for people here, I'd like to assure you that we are all well aware of it and have been for some time. It has been expressed by a wide variety of writers and methods for well over a decade. It was shocking and hurtful years ago; it's normal now.
But, in summary, the rules here are very clear and always have been. I'm not sure anyone could classify what you've written as kind and it's really unclear, to me at least, what is necessary or true of any of it.
PS, seriously, the non-troll way to write this is some variant of "Has this FTX misjudgment caused you to reevaluate any of the controversial writers you've written about in the past or beliefs you've become associated with."
I struggle to believe that you wrote this with any intent but to lecture me on how you wish you were spoken to. I speak as one without regard for your feelings because facts don't care about your feelings. The fact is that Scott Alexander's writings and their **consequences** drove people away. A lot of people away.
I'm here because I like some of Scott's work and still have some hope for reasons I don't fully understand myself.
Scott Alexander platforms someone who actually advocates for a strongman to take power, which no matter what word you put on it (monarchist or fascist) is essentially, in the consequentialist view, a call for violence against minorities of various idpol stripes unless you are a blind fucking idiot too credulous and too easily taken advantage of to be considered a serious political writer. You might not believe it, but enough people do (and those people have a voice, too).
> it's really unclear, to me at least, what is necessary or true of any of it.
Think about it then, and trouble me not with your insipid handwringing about tone, for I don't care. If I am banned for speaking truth then I shall laugh and cross Scott Alexander off my list for good. I always hoped he&his would come around.
Cry less. If Scott Alexander's communities harbored racists and fascists because of Scott Alexander's choices, I want to believe that Scott Alexander's communities harbored racists and fascists.
Ah, I figured the PS was a bad move, ah well. I was genuinely curious whether you thought it might convince someone or, more seriously, whether there was some third option I hadn't considered. That happens sometimes.
I dunno man - you don’t sound cold so much as inflamed, and the jury’s out on whether you’re a disingenuous little shit or not. Catch more flies with honey than you will with vinegar, though.
Don't think you understand me or my motivations. I'm interested in pointing out the flies and urging for their prompt removal. The failure mode of kindness is unknowing indifference to the malign and deceitful. The racists and fascists know their foul full opinions will get them removed.
Ah, the Passion Flower continues to flourish even when transplanted to a new patch, I see!
Impassionata, I have been highly amused by your writings over at r/drama, especially your version of history about engaging with Rationalism. As my admittedly flawed memory recalls it, you spent most of that time arguing that this time for sure, latest investigation was going to end up with Trump in jail. The last prediction of that kind you made was that within two weeks' time he was going to the slammer. Naturally, this did not happen. Naturally, you were joshed about it. And naturally, you left, set up your own site, and went overboard with the marshmallows.
I'm glad to see you seem to be doing better and have found a new happy home!
I'm glad to see you fell for the marshmallow facade. As to your admittedly flawed memory, it seems like it is indeed as flawed as you admit.
All of my statements about Trump were in the vein of what _should_ happen. Were we not captive to the twin bindings of boomer ideological fog and weak rationalist political confusion it might have happened; recall, of course, that Nixon lost political power very swiftly. This is a mark of how low we have fallen. I was wrong of course about the level of corruption in our politics.
So my statements about time were in this vein: that at any point the axe might fall. Now we see what that axe falling looks like, and the interesting times are ahead for the Republican Party: will it eject Trump like snot into a kleenex, and will the snot metastasize?
The real frailty of the ignorant in the culture threads seasoned by the abominably stupid "You Are Still Crying Wolf" was this: that among generally atheist populations used to seeing blind faith in the nation's citizenry and all the dangerously poor thinking that denoted, it was in denial about a fascist movement on US soil. Shall we drop the signs?
* the more overtly religious party gathering under a strongman type politician via a xenophobic impulse,
* incorporating threats, from that politician, of physical violence against journ*lists
* making blatantly dishonest claims about the veracity of elections
* separating children from their parents in camps
* engaging in a physical crackdown on a protest and then touching a Bible
* convening a mob and sending it in the direction of the hall of Power in which the peaceful transition of Power was occurring
* continuing to belabor the lie in order to further division in what should be a united country.
This is fascism. Fortunately the American public could see what Scott could never seem to admit or even understand: that "You Are Still Crying Wolf" was the beginning of the end of his career as a political writer taken seriously outside of his small circle of neoreactionaries brought in as an effort to expand his audience. (/r/sneerclub is populated by fifteen thousand subscribers: people who were repelled by the stench.)
Still, it's good to see you, BothAfternoon (right?)! It's always good to have a personal herald.
In a world where the mainstream academic opinion of Trumpism is that it's fascism and slatestarcodex has 50000 Reddit representatives, are you saying you don't think it's a big signal that 15000 people walked away from SSC?
American mainstream academic opinion is that Democrats are good and Republicans are bad. That part is completely predictable and independent of who happens to be the Republican candidate today. (I am not saying that the opinion is wrong, by the way. Just that it is constant, so we cannot use it as evidence for anything specific that happened recently.)
Also, 15000 is a relatively small number compared to the number of internet users: you could easily find 15000 supporters of a mainstream theory, or 15000 supporters of a conspiracy theory. (Probably even easier for the latter.)
But most importantly, 15000 people in sneerclub does *not* mean 15000 people who walked away from SSC. Many of them have probably never been SSC fans in the first place, and would be sneerclub members also in a parallel world where Trump does not exist. There are all kinds of reasons for joining a nerd-bashing online club. Mostly, because it is fun... if you happen to be that kind of personality type.
"All of my statements about Trump were in the vein of what _should_ happen."
Ah, my delicate little petal, it was that you said "he WILL be going to jail" not that "he SHOULD be going to jail" and when your prognostications did not happen, you flounced off.
Well, we can all rewrite our personal history, and the good folk over at r/drama are not going to be too credulous one way or the other.
As for the rest of it - why do you keep expending so much mental energy on a failure like Scott etc?
There's still yet time my voluptuously verbal friend!
> when your prognostications did not happen, you flounced off.
That is how you tell the story, but the way I tell it is that I was just sick of being browbeaten and not being able to return fire. People can bully leftists a lot in a lot of subtle ways that don't catch moderator attention.
I flounced off because being unable to say "that's racist" or "that's pretty much just fascist" is pretty precisely what drove themotte into its present state.
The same pattern emerged on Discord: a community under Scott Alexander with a sidebar community that was for the mask-off racism. It was uncanny and very interesting how it essentially mapped Scott's own mind: a connection to the racist/fascists that was never allowed to be fully 'conscious' as it were.
> why do you keep expending so much mental energy on a failure like Scott etc?
A good question. He wrote something that impacted me personally once, might be the only real answer.
Out of curiosity, just how many SSC and related elements have you signed up to? I've never gone near the Discord, so O fiery-hued blossom of indignance, you are more devoted a follower of Scott than I am!
The subreddit and the discord. Maybe I was a devoted follower of Scott, but his fruit didn't fall far enough from the rotten rationalist tree. He's better than they are, or could be.
I, uh, so Scott Alexander and rationalists generally are secret fascist theocrats, as evidence here's bad behavior from their weird Berkeley sex cult?
I jest but there's a core thing here that confuses me. I'm not sure if you've attended rationalist or EA meetups but they're really, really different from the people attending your average, say, Trump rally. And clearly these things are tied together somehow in your mind and I'm genuinely curious what connection you see. Like, I know Thiel backed Trump for a while but your average rationalist and your average, say, Oathkeeper are just phenomenally different along virtually every significant personality aspect. How do these groups work together, if at all, in your mind?
I respect your mission deeply. I got directly involved through trying to cut through the shitty politics of these people so that's just the beat I walk, without intending to distract people. (The reality might unfortunately be that these people are too thick in their denial for any of what we say to have an impact in either direction...)
Multiple approaches are necessary in cult deprogramming. I'm hoping to wrap my participation up before too long, I've spent an alarming amount of time in this ego charity of mine...
I strongly suspect there's a motte and bailey definition of scientific racism coming soon, where the motte is some wannabe eugenicist breaking out the skull calipers, and the bailey is anyone who knows what IQ and crime statistics by race look like.
Oooooh, sorry, sometimes I forget that people can live in parallel worlds.
It's because this is one of the few places where smart conservatives can have open conversations. If it's a credible institution or big site, we get censored off in fairly short order. If it's Fox News...it's Fox News, I don't want to have a conversation there anymore than I would want to in the CNN comments section.
To perhaps make this a bit more concrete, I think there's some really interesting arguments around feminism in Lasch's "Women and the Common Life". I'd like to discuss them somewhere and will probably post a few points of interest in the next open thread. I can't post it in a general area or on most social media, I've seen enough people get banned and depersoned to avoid that. Imagine a conservative posting about feminism on Reddit, sounds miserable. However, the best conservative discussions I see, DSL and the Distributist's comment section, aren't really that great.
So yeah, there's no secret Reactionary signal Scott is sending up in the sky. Sincerely, this is by far the best place for intelligent conservatives to discuss things. Everywhere else literally bans us or is a conservative "ghetto".
PS, if anyone does have recommendations for another conservative site with a thoughtful discussion forum, I would greatly appreciate it.
You're sort of right, but in a way that doesn't make a great case for yourself.
I mean, the main counterpoint is that maybe there's a reason those views aren't tolerated by anyone intelligent elsewhere.
"Don't tolerate people who champion race essentialism as racial science" is a recently-erected Schelling-point-now-Chestertons's-fence that came about as a direct response to the holocaust and other similar genocides. If you want to tear down that fence, you have to be very sure you know the consequences before doing so.
>Sincerely, this is by far the best place for intelligent conservatives to discuss things. Everywhere else literally bans us or is a conservative "ghetto".
Once again, consider how saying, with a straight face, "Every place with intelligent conversation bans us and the places that allow us are filled with idiots" reflects on the things you want to say. Maybe everyone else is wrong, or maybe the marketplace of free ideas has judged your ideas to not be marketable.
And it's certainly *possible* that everyone *is* wrong on some things. Hell, it's even probable for at least a small portion. At some point, though, you have to wonder about the sheer number of individual things you're claiming every place with "intelligent discussion" is wrong about in order to ally with all the people in those "ghettos".
(Also, I can't help but point out the irony in using "ghetto" as your insult of choice when implicitly defending accusations of racism)
>Imagine a conservative posting about feminism on Reddit, sounds miserable.
Speaking of parallel universes... some of the biggest anti-feminist communities on the internet have been hosted on reddit. They've cracked down on *some* of them, but many still exist. (I mean, take a look at /r/conservative)
...Unless you're saying that it would be unpleasant because conservative reddit is one of the "ghettos". In which case, I agree that engaging with conservatives on reddit is pretty miserable. I much prefer the quality of their conversation here... on average, at least.
This has been engaged with before. The condensed version of it is "If you set up a space that is free from witch hunts, you end up with three principled people and a zillion witches".
The further problem is, who is a witch? As the post on the Hexenhammer showed, the description of "well she's old and mean and lives alone and has a pet cat and we had a quarrel and then all my milk went sour so clearly she's a witch" isn't good enough.
All we can do is discourage people who go around casting spells and putting curses on people when they do that, and leave the mean old cat ladies alone if they're not riding around on broomsticks.
You want us to conduct a witch hunt. Scott is not, nor does he want to be, Matthew Hopkins.
I'm beginning to think that conservatives' claim that any form of censorship of their views constitutes a witch hunt is an implicit admission that you know you're witches, consorting with the political devil.
(Or maybe it's just that everyone on this site likes making references to things Scott just wrote and it's a coincidence. On the other hand, you and Treb are the only two who've made that reference in this thread...)
I'm not asking for a "witch hunt". You're perfectly free to conduct your witchcraft elsewhere, and express yourself in other ways here, so long as you're articulate. I think the trend of teenage SJWs digging up that one time you said the N word when you are 14, or straight up twisting facts to prove someone is "problematic", is a pox on civil discourse norms.
That being said, I *am* saying that you shouldn't be allowed to practice witchcraft openly in the [idea] market square, while encouraging others to join you, without consequences - and those consequences should probably involve being removed from the [idea] market.
Adding on to that, our mayor, Scott, probably shouldn't be openly reading books on witchcraft and loaning the books out to others to read and - God damn, the longer I torture this metaphor, the cooler you come out as sounding. Be gay do witchcraft.
But first you have to define what is "witchcraft" and it's not as plain, easy and obvious as "well clearly everyone can recognise witchcraft when they see it".
Like you, I was (and indeed am) very, very positive about what witchcraft is. There are beliefs, philosophies, and current social paradigms that I think are witchcraft, and worse than witchcraft: the child sacrifice to Moloch, demonic worship.
Like you, I wanted to ban that. There are things I think are pure poison, hateful, abhorrent, damaging to society and harmful to the self.
But you know what took me a long, hard time to come to grips with? That people are entitled to believe these things. That they are entitled to talk about these things. That they are just as entitled to stand out there in the public square as I am.
I don't know if you identify as a liberal or a progressive or what, just that you're 'not conservative'. Well, I was as zealous as you to burn the witches and the heretics. Except my heretic and witch is someone on your side, probably, and the views that you think are right, good, and proper.
I've had to work hard to learn to tolerate the people in pointy hats on broomsticks on your side. Especially when many of them have long mocked the judge's position on obscenity ("I know it when I see it") when it came to things they wanted made legal, but are now turning around and applying the same test themselves: "I know witchcraft when I see it, and I demand it be banned."
So, first off, this is what I get for not reading the original post in enough depth to see that it was very "HBD" specific and giving a general conservative gripe instead of something specifically on topic. My bad and I apologize.
I'm not interested in defending HBD in general but I'd expect this trope to apply to them 10x. I can't imagine most HBD sites are fun or interesting places to post because most of the HBDers I see here are...not great people. But if there's 1000 cruddy HBD sites and one decent site you can talk on, I don't think there's any great mystery why they keep showing up. As for why the mainstream shuts them out so hard, I agree that it's for a host of good reasons. I don't think the logic you presented is particularly appealing, mainstream society has been wrong on a wide spectrum of bipartisan issues within living memory, but I do think this line "If you want to tear down that fence, you have to be very sure you know the consequences before doing so" is incredibly true.
And yeah, conservative "ghetto" is a bad term but I genuinely can't think of a more communicative term. It's the general idea that, because network effects are real, most people will stay on websites they don't like because all their friends are there and only the weirdos go to alternatives. At which point the alternative website is full of weirdos and it's not a very good place to post. Remember Voat?
So yea, bad post on my part, and I would have written it very differently if I'd spent more time reading it, but I fundamentally don't think this is complicated. HBDers come here because, well, it's nice here and (I'd bet) the nicest place that will tolerate them by a wide margin.
> sometimes I forget that people can live in parallel worlds.
Miss me with this postmodernist subjectivity bullshit. You so-called 'intelligent' conservatives lived in a world where the Christian fascist theocratic movement installed a strongman who sent a mob into the Capitol to disrupt the peaceful transition of power and now demand my respect as if you have a place to stand in your "oh we just live in another screen."
Yeah, and your screen is wrong and stupid.
> So yeah, there's no secret Reactionary signal Scott is sending up in the sky.
Wrong.
> Sincerely, this is by far the best place for intelligent conservatives to discuss things.
"Intelligent" conservatives in what way? Are you here to reinforce your consensus reality? That would be the Dunce Trap.
> Everywhere else literally bans us or is a conservative "ghetto".
Oh you're still in a 'ghetto,' you just live in denial about it because you chase off anyone who will challenge your views.
Oooh, WoolyAI, you should know from what the Passion of the Flower posted, so how do I get in on this Christian fascist theocratic movement? I keep seeing the progressives telling me that this is happening all around and the Christian fascist theocrats are running the place since the 80s but I keep not getting an invite, and I can be a Christian theocrat no bother!
Is there a uniform? Do we have our own flag? Are there medals? What are the hours, only I wouldn't be able to devote meself full time to the oul' racism.
I'm not an HBD guy, and I don't read Moldbug, but as a conservative who has hung out around Scott's blog for years I can say it's obvious why his content is compelling to me: it's because he only bans people when they're breaking the Rule of 2 or otherwise being uncivil, and he doesn't mind discussing an idea with someone even if he disagrees with it. I've learned I can trust Scott because he doesn't say things he doesn't believe just to make sure people see him as having the "right beliefs". He cares more about truth than what is heretical to consider. Lately he has decided not to talk about certain topics that cause him to receive more hate, but I trust that he doesn't lie about the things he does write about.
Why do you think his content is compelling to people?
Scott _knows_ why: he invited them in consciously in an attempt to secure more readers; those leaked emails told us this. Whether or not he's learned from the experience is, perhaps, the open question. (The unfortunate part of writing for a large crowd is you necessarily become a subject.)
I support prison sentences for those who commit crimes and those who induce others to commit crimes. I support social opprobrium for those who believe that what's needed is a single individual to take on all the power because that's, truly, dipshit moronic idiot grandiose bullshit. Anyone who can't see that is an idiot who needs to look into the bloody history of monarchy.
> one or two degrees of separation
Your relation to politics is completely broken. You seem incapable of modeling ideologies as directed by leaders except as some spherical cow model of nodes in graphs. I think you could benefit from posting online about politics a lot less and reading about political theory a lot more, for at least a few years.
I don't come in here and say that anyone and everyone who supports SFBA Rationalism should be shut out of public discourse because that essentializes a movement. Thus it is that your attack of Black Lives Matter is braindead stupid, your false equivalence is rejected.
Reading about political theory? Might as well read the Kabbalah or Family Circus, for all the relevance that load of ivory-tower navel-gazing has ever had to events in the real world.
Hmm..I would probably have said nihilism is more the province of the sophomoric. But maybe that's not very different from what you mean, so I suppose I mostly agree.
I call it like I see it specifically because I saw racism and fascism as protected speech in Scott Alexander's enclaves. Every single one of them had more ambient rightwing shitfuckery than average. And the moderators seemed clueless to it: it was just the background.
Proud. Ignorance. So common among the SFBA Rationalists.
> For example, the democratic state of Weimar Germany empowered a particular monster we're all familiar with and who probably out-killed most historical monarchs.
Thank you for making my point for me. The thirst for a monarch type government is nothing but a thin veil over this desire for strongman politics, made in ignorance of the degradation into violence inherent in empowering a single individual.
> I'm also not interested in silencing its proponents.
There's nothing virtuous about refusing to reject ignorance on some imagined principle: you end up hearing out idiocy and popularizing it (Scott, this is to you).
"The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”
You can. You have to. If prediction markets are what works to help you, then go prediction markets.
I've been through this with the entire sexual and other abuse scandals in the Catholic church. It's really awful when you have to accept that all the horrible stuff is indeed true, and one reaction is naturally "How can I ever believe anything or anyone ever again?"
I'm still Catholic despite it all. It's the wheat and the tares, and we just have to try and do the best we can until the end. There will always be bad actors, but we should not let that make us doubt everything.
One of the deep indignities of the timeline that we live in is that the Catholic Church realized it was infested with pedophiles and took action in response...and that action wasn't to reassign all the pedophiles to a remote parish in Northern Québec where they could be clubbed over the head and buried in shallow graves.
Your comment says loads about you. All good. Nice to share a thread with you and the others here who in the main seem compassionate. I too hope Scott can get past the initial shock and err on the side of trust. What did Reagan say, "Trust but verify"? Trust should be the default for any happy person. To live a life filled with suspicion and distrust is a horrible fate. So my advice is to not globalize distrust because one institution/individual broke the covenant. Trust and compassion are the glue that holds humans, families and societies together from the forces that splinter and sunder us.
Three months ago I had never heard of EA, prediction markets, FTX, or any number of features of the modern scene that everyone else on this thread takes for granted. I am old and out of touch; the world moved on while I stayed still.
So my opinions only have limited value; they are what one might hear from a reasonably well educated liberal, put in cryogenic storage in 1979 and just thawed in 2022! I and my impressions are truly from a different era.
But here, for what value there may be, are some opinions.
1. EA seems a good concept, but I detect a little hubris that might lead to cultic qualities down the road (cults were a problem in my era). But it would be a kind of crowdsourced, decentralized one without the usual charismatic leader. There are obvious downsides to diverting philanthropic energies from small scale present benefits to notional large scale far future benefits. One starves the present to feed a future that may never instantiate. Best, seems to me, to establish some ratio, perhaps 80/20, to do both. The EA community, if it's identical with the rationalist community, seems to overthink things a bit; to get lost in analysis and minutiae. Might be best to take a break every so often and drop the glowing screens. Go outside and hike or do physical labor; put on jeans, boots, and work gloves. Ground. All of this stuff is extremely ephemeral after all!
So much for EA, both admirable and problematic.
2. FTX and the financial world that gave birth to it. Mixed blessings, but badly in need of regulation. Seems fragile, has questionable grounding in real value, so falls under a strange variation of the Red Queen Hypothesis. If notional value and traditional "real" value are competing for resources perhaps we need to look at Competitive Exclusion concepts? Over my head and pay grade, in any case.
3. Prediction Markets. Ingenious innovations (tho' variants must have been around for a long time). Seem to be gambling under a different name. Are they regulated?
4. ACX: You all are collectively the most impressive group of thinkers and writers I've ever seen outside of graduate seminars. I'm seriously not in your league and in over my head besides being behind the times.
That's all. TL;DR! (the time traveller learned finally what that meant. Short attention spans in this era!)
Just so you know, after TL;DR you're supposed to write a short summary for those who did indeed consider it TL and DRd it. It can be a bit more blunt, too.
Regarding your penultimate paragraph, I think there is a strong bias towards noticing higher quality content, and therefore ranking yourself lower relative to it.
For what it's worth, I was scrolling through a whole bunch of comments that seemed to contribute little (to me, at least), then I got to your comment, read it, learned a few new terms (e.g. Red Queen Hypothesis) and took a moment to respond.
So at least from this sample, it seems to me like you are certainly not a below average contributor here.
That said, while it is nice to know the limits of one's own credentials, knowledge, and abilities, I think that credentials are a poor measure of intelligence, and intelligence is a poor measure of being right.
Many people can be intellectually gifted, but if they don't bother systematically using those gifts to find truth, then their abilities are not really relevant. The democratization of knowledge through the internet and other media has increased the ability of moderately intelligent or credentialed people to gain a great deal of knowledge on topics of their choosing.
So while I think that you are probably very much in the same league as median posters here, even if you weren't, that would hardly be a reason why you wouldn't be entitled to an opinion or to your own contributions.
I'm wondering whether we should be expecting to see a system of contractual courts evolve in the crypto space. I can't remember what David Friedman calls them.
I've been dubious about the idea because I'm not sure of where the initial trust comes from.
> I'm wondering whether we should be expecting to see a system of contractual courts evolve in the crypto space. I can't remember what David Friedman calls them.

> I've been dubious about the idea because I'm not sure of where the initial trust comes from.
I think the crypto crowd is trying to do this with 'smart contracts' ... also on a blockchain so no initial trust is required. Ethereum is supposed to be one of those blockchains that enable smart contracts (I think).
Smart contracts are basically automated programs that do blockchain operations. They can be used to make automatically enforced rules, like "if X happens, send Bob a bunch of money," and that's useful for finance stuff.
But the issue is that contracts can be buggy. Maybe you can trick the contract into releasing the money early, or sending the money to Alice instead. In that case, a self-enforcing contract is worse than useless - all that cryptographic power is being used to ensure that your money is irrevocably transferred and no court can force it to be returned.
I don't see a way around this problem, because crypto is designed from the ground up to make it impossible for any one entity to revoke transactions - you would need the entire network to agree to that. (Which has happened - once in Ethereum's history the developers decided to roll back a big hack that stole a lot of money and would have killed their proof of concept - but isn't really reliable.)
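To make the "if X happens, send Bob a bunch of money" idea concrete, here's a toy sketch in Python. This is not real Solidity or any actual Ethereum API; the class and names are hypothetical, purely to illustrate why self-enforcing rules are a double-edged sword: once the transfer condition fires, the contract itself is the only authority, and there is no provision for reversal.

```python
class EscrowContract:
    """Toy 'smart contract': holds a deposit, releases it on a reported event."""

    def __init__(self, depositor: str, beneficiary: str, amount: int):
        self.depositor = depositor
        self.beneficiary = beneficiary
        self.balance = amount
        self.released = False

    def release_if(self, event_happened: bool) -> str:
        # The self-enforcing rule: "if X happens, send Bob the money."
        if self.released:
            # Mirrors the irrevocability problem: once transferred,
            # no court (or anyone else) can claw the funds back.
            raise RuntimeError("funds already transferred -- irrevocable")
        if event_happened:
            self.released = True
            return self.beneficiary  # funds go to Bob
        return self.depositor        # condition not met: refund path

contract = EscrowContract("alice", "bob", 100)
print(contract.release_if(True))  # -> bob
```

A buggy or trickable `event_happened` check is exactly the failure mode described above: the cryptography guarantees the transfer executes, not that the condition was checked correctly.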
The problem is that there's really no provision for enforcement. When someone does a rug-pull, the only consequence is that they lose whatever name they were operating under. (There can, of course, be consequences outside the crypto-community, but that's really saying "We need government regulation!".)
FWIW, I'm generally extremely skeptical about the value of crypto-currencies, except as a means of doing illegal financial transactions. (Even then there needs to be some external enforcement mechanism, or all you've got is a con game.)
Does anyone know of prominent voices in the EA movement that were warning ahead of time that FTX was possibly fraudulent? I ask because although Scott addresses that EA doesn't support such things in theory, there's another question about whether EA is just basically competent at evaluating risks. That's supposed to be their whole thing, and yet in one case where we know the final outcome, they blew it about as hard as possible. If you are worried about, say, AGI due to the messages put out by EA, you probably need to take another good hard look at those beliefs.
I think the problem with FTX is worse than if it were fraud. I think SBF thought he understood finance better than other people, and went headlong into something that's been a known failure mode in financial situations for many years - not keeping enough cash on hand to account for withdrawals.
This aligns with one of my criticisms of EAs, that it's mostly made up of intelligent young people who equate intelligence with knowledge and don't know what they don't know. SBF should have known that using client money to prop up his other business even while incurring losses was a known failure mode and that it could easily end in disaster. But apparently he didn't know, and didn't have anyone in the room with him who could have helped with that. If he had a Goldman Sachs executive advising him, he might have been told long before this blew up. It would have limited his reach, and wouldn't have been as exciting on the way up, but it may have prevented the drop.
I feel a little bad for anyone who put a child in charge of their money, but frankly that's how we all learn lessons. The guy is 30 years old now, so people were giving a 20-something billions of dollars in a highly speculative field for him to run out of Hong Kong and the Bahamas. If they didn't know those things, then that's on them too.
From what I understand, it wasn't exactly a Ponzi scheme - it was more like an Enron scheme, where FTX used its own crypto, backed by actual shares in the company, as collateral for loans to make prop trades. It collapsed when a rival realized what was going on and sold its own holdings of FTX-backed crypto, leading to the Enron-like collapse of FTX.
Which is to say, it was definitely a deliberate fraud (P>0.99 IMO). A bit more clever than the usual crypto fraud but there was no way to do what they did by accident.
I could certainly be wrong, but his behavior in this collapse doesn't seem to me to match someone deliberately defrauding anyone, but instead someone who got caught doing something very dumb and not realizing how stupid his decisions leading up to it really were.
I find it hard to believe that someone capable of setting up a financial scheme like that would be completely unaware of the financial history of such things, especially considering the recency of Madoff, Enron, MF Global, etc. And it's not like he's some tech bro that doesn't have any background in finance; both he and his accomplice gf had enough financial background to know exactly what they were doing.
Seems a lot more likely that his current behavior is a sociopathic attempt to play dumb.
That's certainly possible. My gut is still that he's a very intelligent idiot, who knew how to work in financial markets but not why there may or may not be rules and separation. Keep in mind that he was like nine years old when Enron happened and still only around 16 when Madoff pled guilty.
Yea I'm basically the same age as him and have never worked in finance. I know about Enron/Madoff and why investing customer funds is a huge no-no. The idea that he didn't is laughable.
I have pretty severe seasonal affective disorder. Instead of dealing with antidepressants and light therapy each winter, I wondered if I should just up and move down south to Texas or Florida. Does this work for stopping the disorder? Have any of you done it, and what do you recommend?
I have been debating this as well, and was reminded just how much the winter sucks for me yesterday when we got hit with our first snow of the year and I had to scrape off my car. It took me until recently to realize that even though almost everyone complains about winter, not everyone feels it as severely as I do (and it sounds like you as well). I was reminded reading this post (https://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/) linked the other day that it's not universal to never feel fully awake 4 months of the year (even with the max dose of Wellbutrin in my case).
If you're like me and have a partner who loves the winter or is otherwise not sold on the year-round summer of Florida, I'd recommend doing what we did and trying out somewhere very sunny but still wintery (in our case, St. Moritz, Switzerland, which Google tells me has 322 days of sunshine a year; the high altitude means intense light as well). This experiment helped me verify that the cold is just as big a player for me as the light, but it certainly was a colossal improvement over the northeastern US. California, Nevada, New Mexico, and Colorado all have options for winter + sun if you are in the US and want to stay domestic (or if you need more options for possible job locations if you're moving permanently and don't work remotely).
If moving is not in the cards right now, one- or two-week-long trips to Central America in the winter feel like an injection of serotonin that lasts for up to a month after I return home. I work remotely, so I travel on the weekend and don't even take any time off. They're not very complex or exciting trips; I just rent an Airbnb with a patio and bask outside on my laptop all day :)
Instead of moving your whole life just rent a place for a few months and telecommute.
Also, as a Dallas resident, our winter is gloomy and cold too, though for, say, a native Michigander the cold is probably small potatoes. It was dark at 6 pm here yesterday (Nov 13).
But you may need to consider a Puerto Rico stayover as well.
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America."
I can't speak to the finance side of things (though are they looking at 'is this a scam,' or are they looking at 'will this make money?' those are two different questions and for a while it made money). But the other examples don't seem great to me?
Taking people's money, so long as it doesn't come with strings, doesn't usually mean you've vetted/agreed with them; quite the reverse, in fact. And the SEC thing was that regulations were needed, which just seems transparently correct at this point? Now, SBF was presumably trying to use them to limit competition without limiting his ability to commit what really looks like fraud, but it's not at all clear that he would have succeeded in that, even if everything hadn't collapsed.
> 9. The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”. So I was pleased when Nathan Young figured out the obvious solution: make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively [...]
This is being taken way out of context to show what total freaks we are. I don't think people realize this is tongue in cheek.
I have a question about election odds and prediction markets generally -- anytime I see backward-looking analysis, it all says they're well calibrated, etc. But those analyses I've seen seem to just take one data point of odds during election day -- "if the odds are 60% on election day, that candidate wins 60% of the time" for instance.
But that doesn't seem helpful to me -- what about 1 year in advance? 6 months in advance? Have those odds proven well calibrated? I'm very surprised these markets don't hover very close to 50/50 until about a month out.
"But that doesn't seem helpful to me -- what about 1 year in advance? 6 months in advance? Have those odds proven well calibrated? I'm very surprised these markets don't hover very close to 50/50 until about a month out."
I think assuming a 50:50 split when there are lots of unknowns is one of the classic statistics mistakes.
1 year out, for example, I'd assume that an incumbent US Representative would have something like a 90% chance of retaining his/her/its seat. At the beginning of a football season (college or NFL) every team does NOT have an equal chance to win the championship. Etc.
Point taken, but they still seem overconfident/overly reactive, and to be clear I was talking about aggregate House/Senate odds, not just individual candidates. I'm happy to be proved wrong or my misunderstanding corrected, but I am just never convinced by these calibration analyses.
It is a good point, though. Many prediction markets are open for much longer than 3 months, and it would be good if analyses took into account the forward time span of the prediction. (Good predictions would be more impressive, too.)
The markets longer than three months would be primary markets, candidate markets, etc. The GE market at best can open post-primary, but personally 3 months is the longest span over which I'd say you can make useful predictions.
> But right now is a great time to be a charitable funder: there are lots of really great charities on the verge of collapse who just need a little bit of funding to get them through.
I don't actually know what order of magnitude "a little bit" means here. I'm not a VC or anything, just a fairly boring person who happens to batch their charitable donations to once per year for convenience (yes, I already know this is not generally how charities prefer funders operate). I suspect when Scott asks for potential charitable funders he's talking about bigger game than me, but if a four digit sum of money would make an outsized difference somewhere it would be nice to know about it.
Think the triage process Scott's working on will publish a shortlist of in-trouble charities soon, for small donors like me? Or is there already a post on the EA forum or somewhere that I haven't seen?
Have people done calibration studies on prediction markets? E.g., take all the markets that had $0.60 as the final price for yes and see if 60% of those resolved to yes. I'm especially curious if prediction markets show any systematic overconfidence or underconfidence in their results.
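The check described in that comment is straightforward to sketch. Here is a minimal, self-contained illustration in Python (the toy data and bin width are my own invention, not real market results): group markets by their final YES price and compare each bin's average price to the fraction that actually resolved YES.

```python
from collections import defaultdict

def calibration_table(markets, num_bins=10):
    """Bin markets by final YES price and compare each bin's average
    price to the fraction of its markets that resolved YES.

    `markets` is a list of (final_price, resolved_yes) pairs, where
    final_price is in [0, 1] and resolved_yes is a bool.
    Returns a list of (avg_price, hit_rate, n) tuples, one per bin.
    """
    bins = defaultdict(list)
    for price, resolved in markets:
        # Bin index, e.g. a 0.62 price lands in bin 6 for num_bins=10.
        idx = min(int(price * num_bins), num_bins - 1)
        bins[idx].append((price, resolved))
    table = []
    for idx in sorted(bins):
        entries = bins[idx]
        avg_price = sum(p for p, _ in entries) / len(entries)
        hit_rate = sum(1 for _, r in entries if r) / len(entries)
        table.append((avg_price, hit_rate, len(entries)))
    return table

# Toy data: a well-calibrated book has hit_rate ~ avg_price in each bin.
toy = [(0.6, True), (0.62, True), (0.58, False), (0.1, False), (0.15, False)]
for avg_price, hit_rate, n in calibration_table(toy):
    print(f"price~{avg_price:.2f}  resolved-yes {hit_rate:.0%}  (n={n})")
```

The extra wrinkle raised upthread would be running this same table separately for prices sampled a year out, six months out, etc., rather than only at close.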
I do think a huge area where our social/political technology is "shitty" and "underdeveloped" is not treating high-level bureaucrats and elected officials like jurors.
People in these positions should be "sequestered" and should have their connections/relationships highly scrutinized. You would probably not allow a juror on a trial if the juror's former boss's daughter was the one on trial.
Being elected to, say, Congress should be a 2-year ticket to a bunker in the Nevada desert where there is no access for lobbyists and where the information that is allowed in is tightly constrained to publicly available sources. Maybe not quite that extreme, but close.
These are the most important positions in our society, and the standards are just atrocious in terms of who gets selected. Fuck, the president some cycles seems like they probably aren't even a top-10% person for the job. Like, there are literally millions (tens of millions?) of people who would do a better job.
On point #5, EA does not endorse doing clearly bad things, but prominent EA people such as MacAskill have definitely endorsed taking big risks. SBF's thinking and behavior is very much in line with the EA idea that an action that will probably fail is justified if it has mathematically higher expected value compared to other options.
For instance, in What We Owe The Future (appendix) MacAskill argues that we should not be afraid to "chase tiny probabilities of enormous value"—in other words, we should take actions with the aim of improving the far future, even if the likely outcome of those actions is nothing. He draws an analogy to the (supposed) moral obligation to vote, protest, and sign petitions, even when again the likely outcome is nil. In MacAskill's example, say you can press Button A to save ten lives, or Button B to have a one in a trillion trillion trillion chance of saving one hundred trillion trillion trillion lives. If you're a normal person, you press A and you save lives. MacAskill says you should press B, even knowing that the likely outcome is nothing.
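For concreteness, the expected-value arithmetic behind that button example, using the numbers as quoted above (a trillion is 10^12, so "a trillion trillion trillion" is 10^36):

```python
from fractions import Fraction

# Button A: save ten lives with certainty.
ev_a = 10

# Button B: a one in a trillion-trillion-trillion (10^36) chance of
# saving one hundred trillion-trillion-trillion (10^38) lives.
p_b = Fraction(1, 10**36)
lives_b = 10**38
ev_b = p_b * lives_b  # exact rational arithmetic, no float underflow

print(ev_a, ev_b)  # 10 vs 100: B has ten times A's expected value
```

Which is exactly why a pure expected-value maximizer presses B even though, with overwhelming probability, pressing B saves no one.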
This is directly analogous to SBF's idea that we should weight money linearly (in other words, rejecting decreasing marginal utility of wealth). SBF is willing to "accept a significant chance of failing" in exchange for a small chance of doing a lot of good.
So MacAskill and SBF both endorse taking actions with a large chance of failing if the expected value is high enough, whether that's speculating with customer funds or pouring resources into uncertain projects with a very tiny chance of shifting the far future in a positive direction.
Now there's a distinction between "very risky actions" and "clearly morally bad actions"...but that line is not so bright. SBF took a risk (morally as well as financially) and failed. But no one would be criticizing him if he had succeeded. FTX took big risks, as EAs advocate, and failed. But EAs should understand and acknowledge that frequent failure is a predictable outcome of taking big risks, and, given these values and assuming the math is correct, failure doesn't prove that the actor's underlying thinking was wrong.
"But no one would be criticizing him if he had succeeded."
That depends on the specific mechanics here. If he took money out of trust accounts, I don't care if he succeeded or failed, it's not his money to gamble.
(I've mentioned before, MacAskill's worst-case scenario being no change is dangerously wrong, and is why it's important to focus on causes whose results can be directly observed. You can press Button A to save ten lives, or you can press Button B to have a one-in-a-trillion chance to help trillions AND a one-in-a-million chance to kill tens of thousands. The far future loves to juke you.)
As SBF showed, the EA framework is a great tool for rationalizing whatever cause resonates with you. He likes AI stuff, so he donates to that cause. Then the EA branding makes it seem like he is doing real charity in the service of mankind, rather than tinkering with his hobby.
What the heck kind of suppression is this? FTX is an almost unprecedented blowup with huge political implications because they were the second largest Dem donor after Soros, nobody knows key details because they were radically non-transparent, by definition SOME kind of conspiracy was involved in a situation like this so ANY investigation of who did what will be possible to disparage by calling it a “conspiracy theory”, and this is a frigging OPEN THREAD.
In case this is the problem: if you're replying by email it'll make your comments top-level instead of responses, so you need to go through the app/site for that.
Yes, that was the problem, but I’m not going to bother trying to correct it now, people can just go through the tree below my previous comment to see what I was responding to.
My correction was actually wrong too as the original comment was a reply to a convo down thread about some rumored conspiracies involving Ukraine. But I am not 100% certain of the whole thing!
Nefarious activities involving more than one collaborating person which were concealed from the public.
I do not think SBF was the only human being who was aware that FTX was failing and was hiding that information. His girlfriend Caroline Ellison who ran Alameda must also have known, but probably many more people did.
I was just sitting here, before this open thread, thinking about how my inner critic is excessively harsh and just kind of an asshole. Then I open your thread and see what I think looks like you being hard on yourself for what seem to be similar reasons. You’re doing great work, Scott. If you never trust a scammer at least once in your life, maybe you are missing out on lots of chances to do real good?
Crypto is such a big scam. Lots of VCs and investors are into it because there's money in it. That doesn't mean people have to have amazing insight to beat their assessment of FTX, just basic due diligence that while FTX might be a money hose at present it's built on scams. Crypto is useful for crimes and some extremely limited database functions. It's a scam! Always has been. So yeah, easy for people with a basic understanding to beat investors on the question "is this a reputable organisation" even if they should defer to the investors on the question of "whether or not this potentially criminal enterprise will make money."
You know what I just remembered? NFTs. Remember all that? Over about a 6-month period it went from new thing, to "is this as stupid as it looks?", to that famous long YouTube video ("Line Goes Up"), to, hey, a bunch of scams and rugpulls revealed, and now nobody thinks about it anymore. Loosely speaking. (I notice that, in my semi-expanded view of this entire thread, there are 0 mentions of it.) Maybe there was something to be learned there.
I think "scam" is overselling it. It's a poor investment, certainly. But as a transaction mechanism in certain cases it has a niche. The inherent problem is that people started treating it like they would a new Silicon Valley startup rather than what it was: ForEx. And huge returns on ForEx are highly unlikely.
It's the old old rule: "if it seems too good to be true, then it is".
You don't get easy money like that; there is no such thing as a free lunch, and eventually the chickens come home to roost. The problem with electronic trading like this is that it is all in the ether, there's nothing real there, so it's easy to shuffle it about and make huge illusory gains, which then turn into real losses.
Whatever about cryptocurrency as a new unit of exchange, it went the old route of "people want to make money off this thing", and they found they could make more money by speculating on it than by using it as a currency, so speculation became the way to make (and lose) fortunes.
People are genuinely surprised by the collapse of FTX. Yet there is no shortage of smart people who have been arguing that crypto is a Ponzi due for collapse. Nobody who calls themselves a rationalist should be surprised, any more than a gambler who loses at roulette should be. The possibility of collapse was well known.
I had an opportunity to buy Bitcoin in the very early days, and obviously I could have done so since then. But I can't escape my reasoning from back then, which is just as correct today. Bitcoin (and all "investments" that are only valuable because other people are buying them too) really are a scam. It's a pyramid scheme. There's no underlying value. If you make millions of dollars, which many people have, it's literally at the expense of someone else who put money into Bitcoin instead.
Modern government currencies are backed by, among other things, the very real value of not going to jail for tax evasion. If you do business in a nation, you must pay taxes in that nation's currency or you will be going to jail (or maybe just having all your stuff taken by the government). Regardless of the ethical questions surrounding taxation with or without representation, so long as governments *are* in that line of business, the stay-out-of-jail-for-a-price nature of fiat currencies is a thing of real value that guarantees a real demand for that currency.
Maybe not as much value as you and/or the government were hoping. But if someone offers to pay you in dollars, there is no risk that you'll be stuck holding a bunch of dollars when everybody else says "we now think that this was all just a scam and none of us want your dollars any more".
This is true most of the time, but countries' currencies have become almost worthless before due to hyperinflation. If US dollar inflation next year is 40%, you really don't want to be holding US dollars, even if the IRS remains in business.
Thanks for the strong version of the argument. But:
1. Technically you can pay your taxes with a debit card backed by crypto which gets converted to fiat at the last instant, so taxpayers aren't obliged to own any fiat outside of the last nanosecond before the deadline on April 15
2. Some large percentage of Americans have no tax liability
3. Some large percentage of Americans need bitcoin to gamble online or buy porn or all sorts of other e-commerce that traditional payment processors look down upon. Not as big as the demand for taxes, but only a difference of degree. The USA is just a very large corporation that chose to accept dollars in payment for its services. Bitcoin will always have some of those customers, albeit on a smaller scale.
Fiat currencies are literally centralized shitcoins backed by nothing and inflated at will by a central bank.
The dollar is just numbers in a database, with supply limited only by the whims of a handful of humans at the fed.
Bitcoin is just numbers in a database too, but the supply is tightly limited by an algorithm and a very strong consensus against ever changing that algorithm. That's a big improvement for the purposes of storing value over time.
"The dollar is just numbers in a database, with supply limited only by the whims of a handful of humans at the fed."
No, fiat currencies are supported by one of the most compelling aspects of humanity - violence.
The controlled application of violence is how they maintain stability, and until crypto currencies can secure themselves in the same fashion they will continue to be a pyramid scheme.
Currency is millennia old and one of the best economic coordination tools invented. Cryptocurrencies are rubbish. Bad as tokens of exchange, account or stores of value.
You sound like an economist "sure fiat currency works in practice, but it doesn't work in theory."
Not at all. Governments have two ways of giving underlying value to currency, the first, now out of fashion, is to pledge its exchange for a tangible asset, e.g. gold, silver, or even land has been tried. The second is to be willing to accept it in payment of taxes. Since pretty much everyone owes taxes, and taxes are usually the single largest expense of any wage earner, this immediately gives value to the currency: even if no one else will accept it, your single largest creditor will accept it in payment of your single largest debt. Even if you used BTC for every commercial transaction in your life, if the USG only accepted dollars for payment of taxes, you'd have to keep a big store of dollars around, and they would be valuable to you (and everybody else).
The same would be true for crypto -- if it were widely accepted as payment. That would give it underlying value. However, unlike fiat currency, there does not exist an enormous nearly universal creditor that could give it value all at once, shazam, for nearly everybody, the way a government can. So it has to build such acceptance one economic player at a time, and clearly that has the risk of powerful network effects, both helpful and (in this case) damaging.
Somehow taxes didn't prevent hyperinflation in any of the countries that had hyperinflation. So in what sense do taxes guarantee the value of a fiat currency?
Well, they don't, if you have a government that deliberately inflates its currency, and as far as I know there hasn't been a case of hyperinflation that didn't start off as a quite deliberate attempt to inflate away government debt. It just turns out to be hard to keep the fire under control once you start it.
I certainly don't mean to suggest that government can't *destroy* the value of a currency, they absolutely can, in a number of ways. I was just addressing the fact that government unlike current crypto currencies has the unique ability to *establish* the value of a fiat currency in one fell swoop, and that taking it in payment for taxes is how it's done.
Nitpick: sometimes rapid inflation results from an unplanned currency crisis due to import overreliance, as opposed to an intentional government plan to devalue sovereign debt. EDIT: Carl Pham correctly points out that these situations rarely, if ever, meet the common definition of hyperinflation.
I'll try to be nice: you seem to be parroting talking points (ie politically motivated "questions" which are disingenuous), while your question otherwise implies an ignorance of even basic economics.
So, to answer your question, taxes alone can't stop hyperinflation in situations where hyperinflation is going to happen; there are other ways to avoid hyperinflation. We see the US Fed currently raising interest rates to curb inflation, for example. This is an alternative to raising taxes. We could also just like, let inflation keep running at 8-10% for a while. There is no reason to expect hyperinflation in the US due to current fiscal or monetary policy and there is no indication to me that any political groups with *any* serious influence have plans to promote policies which would even threaten hyperinflation.
It doesn't apply to government currencies, because governments accept their own currencies as protection money. (If you don't pay the government, they'll take your stuff, and perhaps store you in an unpleasant place.) This is what gives "fiat currency" its value. Because of this, everyone else accepts the money as valuable, because they can trade it to someone who needs to pay the government.
It absolutely applies to government currencies. The question is whether these governments value consistency (keeping inflation low) and how much we trust the government and economy to stay stable. In general for major countries, we have pretty good reason to believe that both of these metrics will do well, or at least remain in a fairly narrow window. For comparison, you can look at the pricing history of Bitcoin (probably the most stable cryptocurrency) and see how wildly it fluctuates. Even that undersells how volatile it can be, since there is nothing (not even the governmental reputation government currency has) to back it.
There's also the question of whether we have a choice but to use government currency, which for most people is no.
You'll find that third world countries are not very good at forcing people to use inferior currencies consistently. Guns and law can only do so much. Black markets are inevitable when the black market offers customers a much better deal. It comes down to consumer choice, and one possible future is that the fed allows too much inflation and undermines confidence in USD and causes people to seek alternatives.
The internet and cryptography makes it virtually impossible to use violence to force people not to use crypto. Fiat will have to compete on the basis of consumer choice to some extent.
Space piracy question: using known physics, is it plausible to catch up to a fleeing space craft and board it?
The limiting resource for space travel is Δv. It scales logarithmically with the amount of fuel you bring, so while the pursuer will have more Δv, it seems implausible that they have ten times as much.
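That logarithmic scaling is the Tsiolkovsky rocket equation. A quick sketch, with illustrative numbers of my own (exhaust velocity and masses are made up, not from this thread):

```python
from math import log

def delta_v(exhaust_velocity, dry_mass, propellant_mass):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf),
    where m0 is the full mass and mf the dry (empty) mass."""
    m0 = dry_mass + propellant_mass
    return exhaust_velocity * log(m0 / dry_mass)

# A hypothetical 10-tonne dry ship with 4 km/s exhaust velocity:
v_e, dry = 4.0, 10.0  # km/s, tonnes
for prop in (10, 100, 1000):
    print(f"{prop:>5} t propellant -> {delta_v(v_e, dry, prop):.1f} km/s")
```

Going from 10 t to 1000 t of propellant (100x the fuel) buys well under 10x the Δv, which is why a pursuer with "ten times the Δv" implies an absurdly larger or better-engined ship.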
I will assume that the tech to detect ships over long distance is there, this should benefit the pirates. (Hard to do piracy in fog and all that.)
Capturing a spacecraft which has the bare minimum of fuel it needs for its flight plan is not hard: track it, figure out when it will do its burns, intercept it at some other point in phase space (i.e., match both the position and the velocity at interception time).
This does not seem like a stable equilibrium, however. Eventually the traders will carry their own spare Δv.
In that case, the trader will try to add maneuvers to avoid the interception point, and the pirate will do burns to keep up. What factor of spare Δv does the pirate have to have over the trader to succeed?
One assumption would be that the trader is traveling between mars and earth, and both a mars orbit and an earth orbit are safe havens from pirates. So the pirate has do to the interception somewhere en route.
If both start near to each other, it seems like an easy win for the pirate: they mostly have to match the velocity of their victim maneuver for maneuver, and just invest a little extra Δv to close the distance (and get rid of their relative momentum afterwards).
If they start further away from each other, the task seems harder, but I can't really say by how much.
I am also unsure if gravity fundamentally matters for the outcome or if it would be the same if the pursuit happened in interstellar space (where it is probably easy to calculate).
Also, the point of piracy would be to rob goods or claim ships and take them elsewhere than the destination the original owner had in mind. This seems to put limits on the economics of robbing bulky stuff like ice transports. Robbing stuff with a high price density seems more plausible, but these also seem in a position to have a high fuel-to-payload mass ratio.
From a delta-V perspective, catching up with and boarding the target is not the hard part. As you note, just a little extra Δv will do it. The hard part is getting away afterwards.
If the ship you just seized used half its Δv to boost onto a trajectory to Mars or wherever, and needs the other half to slow down when it gets there, then it almost certainly does not have enough propellant to change course to some asteroid pirate base and decelerate for rendezvous with *that*. So you're going to need to use your own pirate ship to carry off the cargo.
Which means your pirate ship needs to have the extra Δv to A: boost onto a trajectory that will intercept the freighter, then B: match velocities with a ship headed for Mars even though that's not where you want to go, next C: change course to Not Mars, and finally D: decelerate at Not Mars. If both ships have about the same propulsion technology and payload fraction, and the freighter uses it for the optimal trip to Mars, you're probably not going to be able to pull that off. If the pirate ship is much bigger or has much better engines than the freighter, you can probably do it, but then your ship is so much more expensive than the freighter that you probably can't turn a profit seizing the freighter's cargo.
And then there's the problem that the authorities will be able to watch all of this from halfway across the Solar system, so they'll either dispatch a punitive fleet to the pirate asteroid base or send a radio message saying "that's a pirate ship headed your way, we know it, you know it, you know we know it, so if you don't want a visit from a punitive fleet you'd best arrrest them as soon as they show up." Piracy really needs for there to be an "over the horizon" where people can't see what you're up to or where you're going.
Doing the piracy in space is a pain. Instead, bribe the harbormaster and hack the cargo loaders. Redirect goods to locations you favor while the ships are at rest.
The railgun needs to be fired from a fairly close distance, since dodging an unguided projectile is easy from a thousand miles away. So you still need to match orbits reasonably close before you can start threatening them with violence. Plus you need to match orbits in order to recover the cargo, anyway. It doesn't do any good to shoot down a merchant if their cargo drifts out into deep space afterwards.
Indeed, actually intercepting the ship seems superfluous to the goal. It would seem far more efficient to just shoot a missile at the merchant ship and then threaten to let it strike if they don't divert to your port of choosing, or at least ditch the cargo on a trajectory to your favour. Of course, that's a trick that only works until they start carrying missile defences, but even then an armed ship is always going to be at a massive disadvantage against an expendable strike.
Piracy only works if you have somewhere to sell the stolen goods. If every ship is permanently trackable by radar or IR, any ship implicated in piracy will be put on a sanctions list and seized as soon as it tries to land, lest the spaceport harboring pirates get a visit from the space force.
Depending on the circumstances, Mars, or certain corrupt officials on Mars, might be willing to risk a certain amount of collusion with Mars-based pirates until the point that it risks serious retaliation from Earth. Much as Caribbean pirates often relied on a certain amount of collusion, or at least a no-questions-asked tolerance, with local officials.
But I suppose if everything could be seen from Earth and Earth has the means of unilateral enforcement, then there's not much that pirates could do (at least without collusion with authorities on Earth).
That collusion from corrupt officials in Tortuga or Madagascar or wherever, depended on the corrupt officials being able to maintain plausible deniability about the people they were doing business with being known pirates. If the authorities can track the pirates from a distance, and they can, then any port they fly to will have been told unambiguously that they are pirates and anyone doing business with them is an accomplice to piracy.
I can imagine excuses. "Our sensors were down for maintenance! It was a bureaucratic slip up!"
With an added layer of "We poor Martians don't have the privilege of being born on a planet that has water, a breathable atmosphere, and a self-sufficient economy. So sometimes things break down or fall through the cracks. Though there'd probably be less of that if you would send more supplies our way."
But I'd agree that piracy like this probably won't happen if Mars is ever colonized (which I consider a big "if" in itself). I'm really just spitballing here.
There's no practical way to board an aircraft in flight, because of the relative wind that *starts* at hurricane strength and winds up exceeding even an F5 tornado for the planes carrying the sort of cargo that would really attract a pirate's attention. In space, boarding is fairly straightforward if you remembered to bring a spacesuit.
I dunno. Are we assuming the pirates sneak in, two by two, through the unguarded emergency airlock? Because otherwise there are a lot of ways, many pretty low-tech, to put a hole in a spacesuit. The pirates might be better off trying to put a hole in the freighter from a distance, to let all the air out and kill the crew. (Presumably holding them prisoner costs way more than you can expect in ransom for a nobody cabin boy.) But at least some of the freighter crew might still be alert enough to jump into their own spacesuits, with their guns holstered on the *outside*.
It has always been the case that piracy involved the possibility of battle, with sword and pistol and the unavoidable possibility that a pirate might wind up with a hole through which their precious life-sustaining fluid is rapidly escaping. This has historically not stopped piracy, because pirates have historically been willing to accept risk and have generally been much better at armed combat than freighter crewmen. Enough so, that many freighter crews didn't even put up a fight, because hoping for mercy gave better odds than hoping for victory.
Well, yeah, but that was in the days when 300 of you could jump over the gunwale all at once[1], and overwhelm the other crew in 60 seconds of mayhem. I'm just observing it's hard to do shock 'n' awe when you have to clamber clumsily into the airlock one or two at a time, cycle it...tum ti tum ti tum, geez this takes forever...and then...I dunno, wait in the corridor outside for an hour or two, polishing your space cutlass and practicing your footwork, until your war party can fully assemble and storm the bridge.
Judging by the progression of major shipping operations to larger and larger ships with smaller and smaller crews, I can imagine a space freighter with a handful of crew and a very big cargo being standard. A small ship with a dozen pirates may be able to easily overwhelm them regardless of how few can get through a hatch at a time.
I can imagine gravity making a difference. At a suitably high tech level (or possibly even the current one) it might be easier to travel between two craft maintaining constant relative position in 0-G than in a planet's gravitational field, for boarding actions.
I've been writing a small bot for Telegram in Rust that messages me a few times a day to ask how I'm doing and record the result to a SQL file. I'm using it to build up to making another that acts as a middle-man between two people to filter and modify messages (for non-malicious purposes, more of a roleplay kind of thing). You could do the same for Discord or any other system with a decent API. I dunno if that qualifies as "software engineering" though.
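The messaging half of a bot like that depends entirely on the chat platform's API, but the recording half is tiny. A minimal Python/SQLite sketch of the idea (table name and schema are my own invention, not the commenter's actual Rust code):

```python
import sqlite3
from datetime import datetime, timezone

def init_db(path="checkins.db"):
    """Open (or create) the check-in database."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS checkins (
               ts   TEXT NOT NULL,   -- ISO-8601 UTC timestamp
               mood TEXT NOT NULL    -- free-text reply to "how are you doing?"
           )"""
    )
    return conn

def record_checkin(conn, mood):
    """Store one reply, timestamped in UTC."""
    conn.execute(
        "INSERT INTO checkins (ts, mood) VALUES (?, ?)",
        (datetime.now(timezone.utc).isoformat(), mood),
    )
    conn.commit()

# Whatever framework you use (Telegram, Discord, ...) would call
# record_checkin() with the user's reply when it arrives.
conn = init_db(":memory:")  # in-memory DB for demonstration
record_checkin(conn, "pretty good, actually")
print(conn.execute("SELECT COUNT(*) FROM checkins").fetchone()[0])  # prints 1
```

The scheduled "how are you doing?" prompts would just be a timer loop (or cron job) around the platform's send-message call.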
Whilst it doesn't satisfy the ancient hacker saying, beloved by GeoHotz, that "before you hack, let your design pass through three gates; At the first gate, ask yourself, is it fun? At the second gate ask, is it useful? At the third gate ask, is it short?", creating a raytracer in one weekend[1] sounds fun.
Scott's recent post on unfalsifiable internal states made a passing mention of Galton's research on visual imagination, which got me thinking about the topic again and reading Galton's original paper on the matter (https://psychclassics.yorku.ca/Galton/imagery.htm).
One of two things has to be true. Either (A) I am somewhere close to rock bottom on this scale — I identify most closely with the response that Galton ranks #98/100 — or, (B) the people much higher on the scale are either miscommunicating or deluding themselves. The past century and a half of discourse on this topic has mostly been people higher on the scale patiently explaining in small words to people like me that no really, it's (A), and these differences are real and profound. But I'm not convinced.
I do have *spatial* imagination — the ability to hold a scene in my head as an index of objects with shapes, colors, and spatial relationships, and from there make geometric deductions. But to say there is anything visual about this imagination seems strictly metaphorical. The metaphor is a natural one, because humans derive spatial information about our surroundings mostly through sight. But when considering imaginary objects, it would be no more and no less apt to analogize my thought process to feeling around the scene with my hands and deriving information through touch.
Incidentally, I don't dream visually either. My dreams contain emotion, thoughts-as-words, proprioception, and sometimes pain, but I would characterize my spatial perception in dreams as just a dim awareness of what's surrounding me rather than anything visual, like walking in a dark but familiar room. The rare exceptions to this invariably are perceptions of written words.
I have no trouble accepting that there are certain commonplace mental experiences that are just completely missing from my neurology. I already know that sexual jealousy is one of those, and can easily recognize and accept that one because it has easily observable behavioral consequences: that I've been in a comfortable relationship with a polyamorous partner for seven years, while the vast majority of people run screaming from the notion of such a lifestyle. The reason I find visual imagination harder to accept is that it seems like this kind of evidence *should* exist, yet I've never seen it. The ability to visualize a scene in any literal sense, even dimly, seems incredibly useful and should have a lot of unfakable consequences! It should be easy to create a test at which anybody who has it to even a modest degree should be able to easily outperform me. Yet, on some tests that seem like they should work, I come out near the top.
I'm thinking, especially, of blindfold chess. I can play chess with my back turned to the board and just communicate coordinates with my opponent. I've even played two games at once this way, and won them both without making any blunders or illegal moves. Blindfold chess is by no means easy for me — it requires a lot of concentration — but I can do it and I've been able to do it ever since I was very young and a beginner at the game. Yet, most people, even most people who are better at chess than I am, find this ability almost unfathomable (lots of chess *masters* can do it, but I'm nowhere near that level). It seems like any degree of true visual imagination should render this task far easier. I don't understand how I can apparently be near the bottom at visual imagination, yet near the top in this skill.
This all leaves me skeptical that the differences in mental experience are anywhere near as stark as Galton claims. I think that the people who claim much more vivid visual imagination are communicating poorly and insisting that they mean their words more literally than they actually do.
What's the difference between spatial and visual imagination? When I talk about visual imagination I mean I can tap into the mental machinery that processes visual stimuli into a model without having the direct visual stimuli itself.
I "imagine" the Mona Lisa. What's her hair like? It's dark brown and very straight. Is it glossy? Kind of, there's some reflexions on the top of the head. Can I picture her with curly hair? Yes and it feels the same as when I just recalled the memory of the original picture. This I experience outside my field of vision.
It's definitely not the same as seeing with my eyes, but as for literally casting images into my vision I'd call that a visual hallucination rather than imagination.
If you read Galton's paper, a great many people seem to think that they can *literally* mentally cast images into their field of vision, and that these images have the same detail, richness of color, and field of expanse as what is actually before their eyes. I think that such claims are testable, and that testing shows them to be bunk.
I think there's no such thing as free computational power hidden in the depths of your brain.
My intuition (for whatever that's worth) is that if I wanted to visualize a mental image with the same detail as reality, then my imagination would be up to it except that I'd also have to be aware of all those details, and I can't keep that all in my head at once.
For applications like daydreaming, this doesn't matter. Our actual field of vision is also more of an illusion than we think. We focus on something and then we see more details. You can do the same thing with visual imagination, provided you either (a) have memorized those details ahead of time, (b) are willing to make shit up, or (c) a mix.
I suspect that for most people, it's mostly (b) with a tiny bit of (a), but brains don't exactly show their work, so unless you question it, the result doesn't feel meaningfully different from looking at what's in front of you.
I think we agree. Human vision is a mess and our visual field carries a lot less information than we would naively assume. Nonetheless, there's still a lot more information there than we can fit in our working memory. If you make me peer through a tube such that all I can see is the roughly 5° arc of foveal vision where everything is in sharp focus at once and read off a card placed at the end of the tube, you could fit many chess boards' worth of legible information on that card (a legal chess position can encode about 143 bits of information).
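For anyone wondering where a figure like 143 bits comes from, a quick sanity check (assuming the commonly cited ~10^43 order-of-magnitude estimate for the number of legal chess positions; a naive square-by-square encoding gives a looser upper bound):

```python
import math

# Naive upper bound: each of 64 squares is empty or holds one of
# 12 piece types (6 white, 6 black) -> 13 states per square.
naive_bits = 64 * math.log2(13)   # roughly 237 bits

# Restricting to legal positions (~1e43 by published estimates)
# is much tighter: 2^143 is about 1e43.
legal_bits = math.log2(1e43)      # roughly 143 bits
```

So "about 143 bits" is just the log of the legal-position count; the point stands that even one board's worth of position vastly exceeds working memory.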
I have to agree with you. I am a visual artist and also on the low rungs of the scale (while my spatial imagination is also way above average). Many people who claim a visual imagination are surprised to hear that I am aphantasic, but many artists actually are. In fact an artist, particularly a realist representational artist, is acutely aware of how our brains fool us into thinking we see a faithful representation of the world, when in fact all it does is check a very few points against a pre-rendered model for discrepancies. If there aren't any, we can go on our merry way walking down the street without seeing any of the details, while we actually think we saw every cranny in the pavement - this leaves processing power available for searching for actually important information such as a tiger suddenly leaping in front of you, or simply recognising an acquaintance. You cannot take it all in at once.
Once you actually try to draw what you see, you realise that you weren't seeing *at all*, and that truly seeing instead of just looking, letting all the input in, is overwhelming and exhausting, and takes a good deal of concentration (you'll surely miss the gorilla while you were looking at a ball) and training. You then have to constantly fight yourself to draw what you see instead of the poor, low resolution, idea and edge based model in your head. To actually see in proportion, perspective, value, hue, reflected light, etc, etc, etc. It is actually a humbling and mind expanding skill that I recommend to everyone.
I believe that people with vivid visual imagination do exist, but are vanishingly rare. Watch a video of the late Jung Gi drawing, and marvel. That's what actual visual imagination looks like: no guide lines, no reference, just start drawing on one end of a huge piece of paper and end up on the other. If it were that common, artists like that would be a dime a dozen. It took Jung Gi decades of dedicated training and constant practice to be able to do that, and there are countless artists who have put in and still put in just that kind of work, most of whom will never come anywhere close. Even if the hand-eye coordination is not there yet, a visually imaginative individual should be able to put down all of the details, however clumsily.
Our brain fools us while we dream, making us believe that, to some extent, the dream is a complete and detailed "movie" with all the information in it. But I think what is being played is again this low-res, concept-based model, more related to touch than to sight (look at the drawings of little children: they draw what they feel, the contours and edges of things, having first explored the world with hands and mouth as babies, and the ideas of things that later come with language and stories). As in waking hours, the rest is assumed to be there and not challenged, because in dreams there's no reality to check back against. To what extent you can fool yourself, both in dreams and in waking, would perhaps be indicative of where you'd land on this scale.
Here's maybe something concrete. I can take some funny-looking 3D object and "visualize" it: imagine seeing it in front of me from different angles. I can also imagine touching it and running my fingers over the sides to see what shape it is. These are obviously both metaphorical to some extent, because there's no real object there for me to look at or touch. But they are different processes in my head, so I'm forced to say that "I visualize this object" and "I imagine feeling this object" are both more than just "I can hold the geometric properties of this object in my head".
That being said, I, too, need to calculate coordinates if I'm playing chess blindfolded and want to check where a long bishop move ends up. But chess seems like an unusual test, because when I remember a position, I primarily remember the relationships between the pieces and not their separate coordinates. I feel like I would do better if I were better at chess; the places where I have trouble are places where one "chunk" of my model of the position needs to interact with another "chunk" that I was keeping track of separately.
Geometric properties can be discerned through multiple senses. But if I assign the scene I'm considering some property that can only be discerned visually, such as color, this isn't fundamentally different to me than assigning objects a particular texture or a particular odor. Yet it seems that other people go on about their "mind's eye" but never about their "mind's hand" or "mind's nose".
I totally agree that skill and familiarity with chess allow for "chunking" as you describe. I basically have a big dictionary of positional motifs in my head, and starting from a familiar motif and then filling in details allows me to compress the position's representation. It's easier for me to keep track of my opponent's position when I'm playing against someone at or above my level than someone far below it, because their moves make more sense and fit in with motifs that I already have in my dictionary.
That is interesting, I would definitely say I can imagine images and sounds far easier than smells and textures, which is why I'd talk about a "mind's eye" or "inner voice" but not like, a "mental hand" or "mind's nose".
I'm curious whether you'd say you can "hear" a song in your head, in a way that isn't just remembering the lyrics? If I'm familiar with the song I can easily recall the beat, tune and intonation. I'm genuinely unsure if your reaction is going to be "of course I can imagine music, that's different" or "what are you talking about, you're now saying you can hallucinate sound as well as images?"
I can recall sounds more vividly than images mostly because I can mimic them quietly to myself (as in physically, by tapping/muttering/humming/whistling) and compare those noises against my recollection. If I force myself to remain still and silent then auditory recollection no longer seems particularly different from recollection of other senses.
Well, the mind's nose is in any case a much less useful tool for imagining things :)
I was thinking less about color or texture than about shape. Take a piece from a Soma cube puzzle. I can visualize what it looks like, and I can imagine what it would feel like if I were holding it in my hand. Both of these carry the same information about the shape of the piece, but I feel like - because they're different internal experiences - they are not *just* metaphors for manipulating that information.
I've tried using this with relatively good results. Imagine a chunk of wood with a hardhat on, and lo and behold, you remember to "inform your supervisor of the changes or note them in the log."
I have limited success at remembering things accurately; I'll get most of it right, but things like colors or height will shift around. A silver flashlight with a blue bulb gets remembered as a blue flashlight, but is otherwise correct in shape and size. And there was an event way back in the day where I was shocked to find a picture in my friend's house because I'd been seeing the exact face in my dreams multiple times. (Presumably I'd seen the picture before and forgotten.)
This claim is equally weird to me because when I remember visual images, I can't see how I could possibly describe it as anything other than visual. It's not like seeing it in front of me, it's pretty low resolution (except for the small region in focus), but I can imagine how all the colors and shapes fit together, and if I had the talent I could use the image in my head to create an artistic depiction. My dreams are pretty visually vivid, although again they're low resolution, only "rendering" the field of focus. I'm leaning towards this being an actual difference, although possibly we're just describing the same experience in different ways, I certainly feel that it's not metaphorical but actually the best description of the experience. If you tell me to imagine a specific object, I will picture it in my head in a very specific way, and if you later showed me images of that object I could say which ones looked more or less like the image in my head.
I would struggle with blindfold chess, not because I can't imagine a chessboard with pieces on it, but because my imagining is not a photorealistic 8x8 grid. I can imagine, say, a white knight on e4, but I can't do that for all 32 pieces simultaneously, and I'm impressed that people are able to hold all that information in their head (assuming some level of abstraction, but chess really is about the little details and I'd definitely get those wrong).
I'm not sure chess is a good test here. When imagining a scene, you choose where each element is, so if you don't remember each piece's position your imaginary chessboard isn't useful; you say you can imagine spatial relationships, and that's the most important element, so *would* we see a difference here?
A better test might involve something like "from this description, imagine which building would appear more fantastical" or something.
The bandwidth of my spatial imagination is nothing close to what I can get from vision. I can remember what piece is where, but if I want to verify the legality of moving a bishop from g4 to d7, I need to double-check the coordinate math to confirm that those squares are actually on the same diagonal, and then think to make sure that f5 and e6 are unoccupied. Having a board in front of me lets me do this at a glance, and even being able to look at an *empty* board speeds things up a lot.
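That coordinate math is mechanical: two squares share a diagonal exactly when the file distance equals the rank distance, and the intermediate squares fall out of walking one step at a time. A sketch (helper names are mine, purely illustrative):

```python
def parse(sq: str):
    """'g4' -> (6, 3): zero-indexed file and rank."""
    return ord(sq[0]) - ord("a"), int(sq[1]) - 1

def fmt(f: int, r: int) -> str:
    return chr(ord("a") + f) + str(r + 1)

def bishop_path(a: str, b: str):
    """Squares strictly between a and b if they lie on one diagonal, else None."""
    (f1, r1), (f2, r2) = parse(a), parse(b)
    if a == b or abs(f1 - f2) != abs(r1 - r2):
        return None  # not a bishop move
    df = 1 if f2 > f1 else -1
    dr = 1 if r2 > r1 else -1
    path = []
    f, r = f1 + df, r1 + dr
    while (f, r) != (f2, r2):
        path.append(fmt(f, r))
        f, r = f + df, r + dr
    return path

bishop_path("g4", "d7")  # -> ['f5', 'e6'], the squares that must be empty
```

Which is exactly the check described: g4-d7 is a diagonal, and f5 and e6 are the squares that have to be unoccupied.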
If I take the people in the top quartile of Galton's survey at their word, sufficient information should be available in their mind's eye to play blindfold chess just as easily as I can play ordinary chess. Yet the proportion of chess players able to play blindfolded at all is far, far lower than a quarter.
Memory and prediction are not the same skill. Once you start moving pieces in a blindfold game, you're not recalling an image anymore, you're creating one. If you ask me to remember the ruler on my desk, I'll remember it pretty accurately. If you ask me to imagine moving the remembered ruler from point A to point B and measuring something, both the ruler and the room are going to start shifting and warping, unless I have a memory of doing this exact task before.
I concur. I rank very highly in terms of visual imagination, as far as I can figure it, and the difficult part of blindfolded chess for me is not knowing where pieces are in relation to other pieces (as in, is this bishop on a diagonal to that rook), but in remembering what pieces are where. I can equally easily imagine chess pieces in any number of configurations, which makes remembering where things are on the actual chessboard much harder.
Not that I've ever tried to play chess blindfolded, mind. It's just that remembering where the pieces are over many moves sounds to me like the hard part; recognizing the spatial relationship of pieces I am visualizing is simple. Translating that visualization to grid coordinates also sounds difficult: it would be a bit of a pain to count the rows and columns every time, but presumably with practice it would become easier.
Also, I've experienced waking visual imagination after a surgery (presumably, under the influence of some anaesthetic) and it was very different from the way I normally do spatial tasks. I remember hoping that the capability would be permanently unlocked, but nope.
Now that we are permitted to post frivolous ideas again, it occurred to me that Scott - instead of simply declining a Conversation with Tyler [Cowen] - might consider writing a satire as if one had happened. For example:
T: It's time for overrated or underrated.
S: If we must.
T: The mental health benefits of Ixtlilton. Overrated or underrated?
S: Ix...? No, wait. You can't fool me into thinking an Aztec god is a medication!
"I didn’t actually tell other people they should trust FTX, but I would have if those other people had asked."
Why? I hope this doesn't come across as an aggressive question, but I'm curious to understand what it was about the situation which would have led you to that conclusion. Was it based on an assessment of the people involved or of the exchange structure?
"I just subscribed to Astral Codex Ten" got SBF only 22 likes (23 is me, right now). Ok, Jesus had just 12. - Nice post. Makes me think of: all the smart and nice people who work(ed) for government agencies/NGOs supposed to do good, but really are often/mostly not - or mostly embezzlement of taxpayers' money. I am still sorry I complained about the silly stuff I was supposed to do for the Goethe-Institut - which got me fired. I might have been able to spend some funds in a slightly less silly way. And it would have gotten me 200k in net extra life earnings. - At least I never taught at schools. Well, except when I did. Can a good person work in a wrong-doing org/company? Ofc not, except when they do. Which is: all the time. "It didn't pay to trust another human being. Humans didn't have it, whatever it took." "If I bet on humanity, I'd never cash a ticket." Bukowski, obviously. RIP
Why highlight only the $40 million going to Democrats? A stronger statement, and a better one for supporting the thesis of the point - i.e. "most people everywhere were hoodwinked, including in the political system" - is in the article:
"Of that total, 92% has gone to the Democrats, with the remainder going to Republican candidates and campaigns. FTX co-CEO Salame favors the red side of the political divide, donating $23.6 million to Republican campaigns for the current cycle.
The top political contributor was billionaire investor George Soros, who has pledged $128.5 million to the Democrats. Billionaire venture capitalist Peter Thiel, who has backed several crypto startups, was ninth on the list with $32.6 million for the Republicans."
Saying only that the $40 million went to Democrats makes it seem like they were uniquely vulnerable to/compromised by Ponzi crypto money, when that's not the case - especially now, when the Democrats are about to lose the House.
I am not an effective altruist and find it quite fraught with issues, however I agree with Tyler Cowen's assessment:
"I would say if the FTX debacle first leads you to increase your condemnation of EA, utilitarianism, philosophy, crypto, and so on that is a kind of red flag for your thought processes. They probably could stand some improvement, even if your particular conclusion might be correct. As I’ve argued lately, it is easier to carry and amplify damning sympathies when you can channel your negative emotions through the symbolism of a particular individual. "
The FTX debacle has had an infinitesimal impact on my assessment of crypto, human nefariousness, or EA adherents in general, given that I already believed it was bunkum. The excuses peddled, words written, and reasoning used to buttress EA post-FTX have lowered my opinion and assessment of EA. All they have done is further reinforce my opinion that, to put it glibly, EA adherents and utilitarians in general suffer from the same cognitive defect that a paperclip-maximising AI would have if it turned all humans into paperclips to make more paperclips for humans to use. As they say, it's the same energy, the two pictures are the same, etc.
If you don't see FTX as another decent-sized condemnation of crypto, you are a complete fucking moron.
Literally one of the main criticisms of the space, since ~2012-2013, is something like the following:
Crypto is effectively an expensive, wasteful, poorly functioning database - except with no admin, controlled instead by "votes" among large players. No recourse to fix anything or track anything. The big virtue of crippling yourself this way is that it is "trustless".
Except... using it is overly technical for the vast majority of the population, who will be left to place their trust in entities which are actually *less* trustworthy than the banks and states they claim to want to flee for lack of trust.
And this is yet another datapoint showing exactly that dynamic: people worried about the hounds running straight into the arms of the hunters, and getting shot.
I am not a crypto enthusiast and don't own any. Nothing about the FTX fraud requires it to be done with crypto.
Anyway, the quote from Cowen isn't about whether crypto or EA or what have you is good or bad, just that this single data point isn't useful for forming an opinion.
>Nothing about the FTX fraud requires it to be done with crypto.
Sorta disagree, the hype around the crypto-sphere is what enabled the FTX fraud. Yes it's technically possible to pull off the same kind of fraud with beanie babies, but it's not the 90s anymore. And the EA/rat community has perhaps played a significant role in lending legitimacy to that hype.
That said, I sort of understand where the Cowen quote is coming from. We're burning a scapegoat here and possibly updating too harshly in some ways. But I've long been a crypto-skeptic and I think the community needs some harsh updating in that general direction, even if it comes on the back of a single data point instead of more holistically.
See that is just rank idiocy. It is definitely useful for forming an opinion. It shouldn't be the only thing you consider, but surely the fact that it is a case where one of the main criticisms of crypto came true seems like it should give you pause. It is *some evidence*.
You and he sound like the goddamn Bolsheviks: "sure, everyone warned that if we dispossessed the kulaks there would be a famine, and now there is a big famine, but that is just one data point! It doesn't mean ANYTHING!"
The idea that it isn't "useful for forming an opinion" is just sticking your head in the sand.
Not to mention that this is not datapoint 1 on this in the crypto space, but more like datapoint 12.
I'm not convinced that the FTX situation is actually an example of the main criticisms of crypto. When people say "crypto is a ponzi scheme", they aren't saying that individual companies have a high likelihood of doing fraud. They're taking aim at the idea that the currency itself has any value. "The companies are actively lying to you about how much they hold in assets" is a critique I see coming from within the crypto community far more often than from outsiders. Stablecoin managers are constantly getting harangued to release public audits of their books, and clearly for good reason.
I think the ponzi nature of crypto was a significant cause of the disaster. The collapse of Terra-Luna seems to have been a major factor in Alameda requiring a bailout from FTX (as argued by https://milkyeggs.com/?p=175), and that collapse happened because Terra-Luna was a ponzi scheme.
Yes, there's a valuable distinction between "this token has no value except your ability to persuade other people to buy it, and eventually no one's gonna want to buy it" (Ponzi), "this token has few protections against theft or loss, far out of proportion to its advantages" (the patio11 critique), and "the guy holding the tokens or maybe dollars for you is _in the current process of stealing them_" (FTX). FTX's specific behavior would have been a problem even had they been 'merely' a traditional non-investment bank!
(Possibly caught earlier, but then again, see Wells-"you wanted an extra account, right"-Fargo)
That said, there is a more general problem that crypto exchanges go bust for perfectly legal reasons on a pretty regular basis. The fraud here is getting additional publicity and scandal, but EffectiveAltruism forum posts estimate that a little over a third of 'committed' 2022 funding came through FTX or the FTX Future Fund, and are talking about making sure people can pay rent, now.
Utilitarian offshoots obsessed with using resources to improve the lot of those decoupled from the generation of those resources are just a manifestation of entropy. It is fundamentally against the processes that lead to good human existence, and it uses self-wanking utilitarian ideology to justify this breaking of the selection mechanisms - biological, cultural, etc. - that underpin the order leading to human survival, prosperity, and reproduction. It does so by dressing itself up as charity, decoupled from why charity might be adaptive, and is emblematic of the "global end-of-history world as a single utopian village" cognitive error.
Tl;dr: EA (and utilitarianism in general) is analogous to paperclip-maximising AI risk, but applied to human wetware.
That's before even getting into whether EA is sincere at all - whether its adherents are liars trying to status-launder, engage in FTX-style shenanigans, etc.
One thing I would say is that it's important to distinguish the ideas and ideology from the movement and people claiming them, and as someone who believes strongly in the former I am deeply sceptical about large parts of the latter - I think that a lot of the "long-termist" things (especially AI risk) that people associated with Effective Altruism push are not actually effective altruism, and detract focus from short-termist projects like malaria prevention and international development where an extra pound of spending probably does much more good.
I think that the capital-E Effective Altruist movement could fairly be called "invalid" (although that's not a word I'd choose myself) if it's not doing small-e effective altruism, and although some of it definitely is, I suspect quite a lot of it isn't.
I also think that there's at least a plausible line one can draw between the kind of high-self-confidence galaxy-brain thinking that leads to favouring AI over mosquito nets and the fall of SBF.
I'd say EA would be invalid as a movement if it turned out that the majority of EA adherents don't actually want to act in an altruistic way, or find out the most effective ways to do so. Instead they could have other goals, like learning to sound smart on the internet (mea culpa), or defrauding intelligent, well-to-do people who lack defenses against social attacks from their own (perceived) ingroup, or quickly climbing the social ladder, or stinging other people's brain stems and laying eggs in their bellies.
So if thousands of people suddenly found themselves bursting into wasps, and a major EA proponent was found to be responsible, you'd update away from "EA is a good strategy that many people genuinely want to make happen" and towards "people who fund antihelmintics research and obsessively buy mosquito nets don't actually care about saving lives, they're giant wasps trying to reduce competition from the worm and mosquito people".
Is this the case? Personally, I don't think so. I believe the average EA proponent genuinely cares about other people, would sacrifice some personal comfort to help them, and is interested in knowing how best to do that. I also believe that people who deviate from the as-stated EA norms are a minority, and that most deviations result from morality creep instead of blatant bad-faith actors. I'll continue trusting them.
EA adherents turning people into wasps, if it happened to set the value in a utility metric higher, is the kind of thing I would expect them to make excuses for... "but but but but they are *happier* as wasps!" or something like that.
Yes, I don't think it can ever be effective. I doubt it is altruism.
I doubt whether altruism is effective in the way the common understanding of the word might imply, and I also doubt whether the reasoning and underlying implied worldview of effective altruists can lead to taking effective altruistic actions.
On point 7 - I think the main anti-crypto claims made by hostile media outlets were that crypto is full of grifters, that lack of regulation makes crypto a dangerous industry, and that crypto is a series of ponzi schemes. And I think the first two of those claims do an accurate if not precise job of explaining (predicting?) what went wrong at FTX, and you could make a case that the third one does too.
A question for the guys who make "crypto is sound money / could be the next gold standard / would stop fractional reserve banking / rein in the central banks / more stable than fiat currency" type arguments: has FTX lowered your confidence?
If SBF was investing his customer's assets and covering the difference in account balances by minting his own token, isn't that basically fractional reserve banking? The whole thing looks a lot like an ordinary bank run to me.
I work in the field. FTX has not affected my confidence in the soundness of the _technology_ I personally work with/on, which is always the part that has been of greatest interest to me. But it has lowered my confidence that it is possible for the _industry_ to function in a way that is good for the world, yes.
(EDIT: I guess I'm not really responsive to your question, since I'm not a crypto-goldbug or whatever.)
Everyone in crypto knows the entire "industry" is scams stacked on top of each other.
I'm quite impressed at the illusion of a legitimate financial institution FTX managed to create, despite being a bunch of autists abusing stimulants in the Bahamas.
It appears that the proliferation of crypto is based directly on the recent low interest rates and free money floating around. There's been too much money in the system for a number of years, and lots of people chasing investments when returns are dropping. It seems quite likely that all of these digital currencies are in danger of sudden collapse if the monetary situation changes. This seems likely in the next year or two.
The problem with treating crypto like gold or some other type of more normal investment is that there's no actual good that can be held. Sure, gold can lose a ton of value as well, but you at least still have a metal with some inherently useful properties. With crypto you have literally nothing if the value bottoms out.
I'm agnostic on the main question, but no; the FTX thing has zero influence on my view. It was a centralized business; none of its faults are due to the tech of ETH.
This. If someone invents an (almost) unbreakable material, and someone else starts a company where they supply locks made from it and hang on to the keys for you, and it turns out they were selling the keys to burglars, that doesn't affect my confidence in the structural properties of the material.
I mean, you don't have to go through a centralized exchange to engage with crypto. People do it because it's more convenient, but there's nothing stopping you from using DeFi, or practicing self-custody.
Yeah -- and I don't see a counterfactual world in which that statement is not true.
The core idea of crypto (or at least one of the core ideas) is to have tech that does away with the need for trust. This applies to various things; if I hold ETH in a personal wallet, I'm not relying on any other people. But it doesn't apply to centralized exchanges.
One should also point out that as far as I know, there's really no reason to ever leave funds lying around on a centralized exchange. You could and should use them only briefly and then transfer your funds to a personal wallet. (And also worth noting that it's possible to avoid them altogether.)
Don't you need high-speed low-cost low-latency Internet for crypto to have any value? Does that kind of Internet run on trust at all, or require government, or could we imagine it arising spontaneously in a libertarian paradise or anarchy?
You could say this exact same thing about money/fiat.
No need for trust, just make sure you cash out into hard goods each day! People don't do that because it is super inconvenient. Ditto crypto. The "trustless" "feature" is not a feature that 90%+ of users are going to be able to take any advantage of; they are just going to end up trusting even more shaky institutions.
It's a monoamine neurotransmitter reuptake inhibitor which is selective for serotonin (as opposed to dopamine and norepinephrine). Serotonin / dopamine / norepinephrine are all structurally similar, and drugs tend to have activity on all three systems.
Generally, my surprise at "another crypto outfit goes down in flames among accusations of fraud and deceit" is on the "time to get more popcorn" level. Apparently, the demise of this particular outfit hurts genuinely well-meaning people, not just the usual fools who have not bothered to watch "Line goes up" yet, which is unfortunate. But to me, the big question is, who's next? If, as Scott describes, one company managed to hide behind an altruistic window-dressing and almost achieve regulatory capture - what are the other timebombs that are ticking in the US and European economies?
I wouldn't be too surprised if Elon Musk's empire collapsed next (I feel there has been a shift in public perception from "tech wizard/ genius entrepreneur" to "Bond villain/ bumbling fool", which may make it harder to pull off more stunts). Who/ what else?
If a 'line goes up' company collapses it might leave nothing but a crater behind. Maybe some software and servers to run an exchange. If Tesla collapses, isn't the floor way higher than that? Another company will surely take over or repurpose all those factories and warehouses and the tech running them. I'm a giant Musk critic but at least most of his companies are making real stuff.
I wonder how much Tesla's customer driving data is worth by itself? Any company that wants to compete in self-driving would need to capture a similar dataset.
SpaceX certainly depends on government contracts. If there were a timebomb it would probably be that one. I think the government would happily bail them out though. Seems like a no brainer in national security terms alone, no matter how unprofitable, we need to have some launch capability in the US as insurance at the very least.
SpaceX should be fine. NASA has a clear interest in keeping them running and they are probably reasonably (but not very) profitable. The real question is around Starlink, since Musk has said it isn't profitable and he has sunk a lot of money into building it. It's not really clear to me that it will ever be profitable, and that's something that will create turmoil over the next few years for Musk.
SpaceX could probably spin off Starlink if it came to the worst, and then continue running on NASA contracts. I'm much more skeptical about things like Starship, however. I think the demand just isn't there, and in any case the rocket doesn't make a lot of sense for orbital applications. If it ever works I suspect it will only fly a handful of times per decade.
It would reduce the cost of launching the satellites. The satellites themselves would obviously still be kinda expensive, Starlink would still have to launch a lot of them just to replace the losses (15-20% per year), and they'd still have to get a lot of customers to shell out $100 a month... and I suspect that the regions with a sufficient density of potential customers, but no fiber-optic connections available, will keep shrinking in the coming years.
I think musk's companies (at least Tesla and SpaceX) are net profitable now? I wouldn't be shocked if their stocks crash and Musk personally declares bankruptcy, but I'd expect Tesla to keep existing and producing cars under new management.
From a cursory web search: SpaceX is a private company, they don't have to publish info on their financial status. They SAY they are profitable, which means exactly nothing. They apparently make profits on commercial Falcon 9 launches, but whether that outweighs the money they invest into Starlink and the development of Starship is doubtful.
"his hedge fund can make money by taking risky leveraged positions, but it has to raise funds, and that's not cheap. And his exchange can make money by charging fees on transactions, but although that can be a nice slow steady income, it's not going to make him the trillions of dollars he wants.
But Joe's spotted an opportunity. The exchange has lots of customer assets that aren't earning anything. If he puts those customer assets to work, he can earn far more from his exchange customers. And he's got an obvious vehicle through which to put them to work. The hedge fund. If he transfers customer assets on the exchange to the hedge fund, it can lend or pledge them at risk to earn megabucks.
Of course, there's a risk that the hedge fund could lose some or all of the customers' funds. And the exchange promises that customers can have their assets back on demand, which could be a trifle problematic if they are locked up in leveraged positions held by the hedge fund. But this is crypto. There's an easy solution. The exchange can issue its own token to replace the customer assets transferred to the hedge fund. The exchange will report customer balances in terms of the assets they have deposited, but what it will actually hold will be its own token. If customers request to withdraw their balances, the exchange will sell its own tokens to obtain the necessary assets - after all, crypto assets, like dollars, are fungible. "
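The token-substitution scheme quoted above can be made concrete with a toy model. This is purely illustrative: the numbers, the function name, and the linear price-impact assumption are invented for the sketch, not taken from FTX's actual books.

```python
# Toy model of an exchange that covers customer balances with its own
# thinly traded token. All numbers and the linear price-impact
# assumption are invented for illustration.

def recoverable_usd(token_price, tokens_held, price_impact_per_token):
    """USD actually raised if every token must be sold to meet withdrawals.

    Each token sold pushes the market price down by a fixed amount
    (a crude model of a thin order book); the price can't go below zero.
    """
    total = 0.0
    price = token_price
    for _ in range(tokens_held):
        price = max(price - price_impact_per_token, 0.0)
        total += price
    return total

# On paper, 1M tokens marked at $20 "cover" $20M of customer deposits.
paper_value = 1_000_000 * 20
recovered = recoverable_usd(20.0, 1_000_000, 0.0001)
# Selling into the thin market recovers only about $2M of the $20M.
```

The point of the sketch: marking the token at its last traded price overstates the recoverable value by an order of magnitude here, which is the sense in which the balance sheet looks fine right up until everyone asks for their money back at once.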
What was the legal status of the assets held by the exchange? Were they considered bailments or deposits? If they were considered deposits, would that not turn the exchange into a bank? Was it regulated as a bank and did it have a reserve requirement?
Thanks for this. I feel like I’m much closer to understanding this implosion than I was before!
What I still don’t understand: This seems like it would only work if they were conjuring tokens out of nothing and declaring them to be fixed to a dollar value (otherwise their assets would fluctuate with the value of the token, potentially below the 1:1 replacement value that allows for fungibility.)
Was there a market for FTT or were they just declaring what it would be worth?
They were basically conjuring FTT out of thin air, but FTT were thinly traded enough that they could control the price and peg it to what they wanted without much expenditure of resources. Except when suddenly everyone wants to trade their FTT in for something with real value the problem becomes you don't have the resources to back up the FTT.
Just typical ponzi scheme stuff. I can give you an "IOU" for $4,000, and just pay off the people who stumble in asking for their $4,000 one at a time, even with just a small cashflow of a few tens of thousands of dollars a day.
But if everyone starts coming and suddenly I need $400,000 today (or $4,000,000,000) instead of $16,000...I am screwed.
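The cash-flow mismatch in the two paragraphs above is easy to make concrete. A minimal sketch, reusing the comment's own invented figures ($4,000 IOUs, a daily cash inflow of a few tens of thousands of dollars):

```python
# Minimal sketch of the IOU cash-flow problem described above.
# The figures are the comment's own illustrative numbers.

def can_pay_today(daily_cash, iou_size, redemptions_today):
    """True if today's cash on hand covers today's redemption requests."""
    return daily_cash >= iou_size * redemptions_today

DAILY_CASH = 30_000   # a "small cashflow of a few tens of thousands"
IOU = 4_000

# A quiet day: four holders stumble in, and $16,000 is easy to cover.
assert can_pay_today(DAILY_CASH, IOU, 4)

# A run: 100 holders (or a million) show up the same day.
assert not can_pay_today(DAILY_CASH, IOU, 100)
```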
This is good, but this assumes that this was the plan all along and the entire organization was in on the fraud.
The theory most people in the space somewhat believe is the following:
1. Alameda takes huge losses during the LUNA saga.
2. For reasons, SBF and the core exec group of FTX decide to bail them out, using a back door to lend them billions in FTT tokens.
3. Alameda then deposits those FTT tokens and borrows against them to the tune of billions, to get themselves out of the hole.
4. It all comes crashing down when the balance sheet leaks, CZ tweets that Alameda holds a lot of FTT and plans to sell his own, and rumors spread quickly that CZ's firesale of FTT tokens might affect FTX solvency, creating a run on the brokerage.
"This is good, but this assumes that this was the plan all along and the entire organization was in on the fraud."
I don't know if this was the plan all along, but if it were, I don't think they regarded it as fraud. First, from what I'm reading, only a handful of people really knew what was going on with all the juggling around of funds etc., everyone else just did their jobs and did what they were told.
Second, I don't think Bankman-Fried and his trusted handful thought of things like this as fraud, they thought of it as "high-risk, go for the 51% chance as the game theory maximisation of utility one-boxing strategy teaches us" - they relied on being Very Smart and Rational(ist) and that this was a bunch of funding just sitting there doing nothing when it could be put to work and earn huge returns and Do Good. It would never go wrong, because they were Too Smart for things to go wrong. Maybe the dull boring red-tape bureaucracy said this was a big no-no, but hey: Silicon Valley, move fast and break things, this is the new generation so get out of our way grandpa, the future belongs to us and anyway you can't even understand how crypto works so how are you gonna tell us what to do?
It's Kipling and the gods of the copybook headings all over again.
Regarding the gods of the copybook headings, doesn't Kipling explicitly contrast them against the gods of the marketplace?
In which case it seems likely to me that the gods of the marketplace were the authors of both the rise and fall of FTX and that the religious platitudes and folk wisdom of the gods of the copybook headings had very little to do with it.
Of course I never did properly understand what Kipling was getting at with that poem. Perhaps I should read the Wikipedia article on it.
Well maybe it wasn't fraud in their heads, but a brokerage lending client funds to another client is fraud in most jurisdictions.
Lending assets through a back door so that internal compliance and audits cannot flag them seems like fraud to me.
Finally, the risk management process of providing lending value to your own token is very irrational. Maybe in their heads the end justified the means but boy did they do some pretty poor risk calculations.
The lack of attention paid to global traffic deaths is genuinely insane. In the US alone, there are 30K annual deaths and countless serious injuries.
As others have amply noted, lots of people pay lots of attention to traffic deaths, and have done great things in making driving safer.
We have *also* paid lots of attention to the *benefits* of the unparalleled personal mobility provided by widespread use of automobiles. Aside from a relative handful of extreme WEIRDos in places like San Francisco and NYC, we've all decided that these benefits are worth the 0.01% chance of being killed in a car accident next year. If we can reduce that to 0.005% or 0.001%, great, but we're not giving up any of the utility to get that.
If you disagree, fine. We're not going to rearrange the world to your tastes, but we're also not going to force you to get in a car or live in one of the places where driving is a necessity.
So I take it you're a big supporter of repealing all parking mandates and raising the gas tax to $4 or so to account for the full externalities of driving, so that the rest of us aren't subsidizing you anymore?
Hmm, what about the subsidy you get from those who drive? Who delivers the groceries to the grocery store in a big truck so you can cycle down there and pick up all you need in your backpack? If the electricity goes out on your street, are the workmen going to take the bus to get there? How will they bring their tools? Are you willing to fork out much higher prices at the fast-food joint so that the line workers can buy houses in your neighborhood instead of commuting (by car) from where it's cheaper?
Driving isn't ubiquitous because everyone loves cars, or from perversity, but because it provides enormous economic advantages, both personal and collective.
Nope, it's the result of trillions in government subsidies and laws requiring car-oriented development going back to the Eisenhower administration. Of the things you listed, some aren't required and those that are could simply pass prices on to the consumers - zero reason to do it through inefficient government mandates and subsidies.
Huh. Well, OK, if your belief is that vast conspiracies account better for the current state of affairs than a few billion people[1] making the decisions that are economically and personally advantageous for themselves...hmm, not sure what to say. You're in good company, of course. Many people think any number of aspects of the current world are the result of fiendish and shockingly broad and long-lived conspiracies. It's a point of view.
---------------------
[1] I assume you can rationalize the fact that your conspiracy took root all over the world, i.e. was not restricted to the Eisenhower Administration in the United States, and was effective everywhere from Jakarta to Timbuktu.
That's... Not remotely true? The US is a massive outlier on car centricity. And it doesn't require a conspiracy, city planning paradigms vary by era and the late 20th century top-down planning paradigm was pretty car-focused before most places realized the downsides and started walking away from it.
I actually don't drive much, primarily for this reason. I've still got a learner's permit when I could have gotten a license a decade ago, because any time I'm on the road, I'm wondering whether I'll contribute to the enormous number of deaths.
Obviously just not driving isn't a scalable solution, at least until self driving cars take over or public transit vastly improves. In the short term, I'm not sure what we can do.
Umm, it is something car companies, engineers, and the regulatory environment spend literally trillions on. I don't think there is a lack of attention.
It is just a hard problem, and also, you know, everyone dies. Constructing a society where no one dies except of old age is not desirable, and would in fact be miserable.
The trillions are spent on improving capacity, not saving lives. If engineering was placed on saving lives, you'd see bollards on busy street corners instead of breakaway street light posts.
Actually, quite a lot of attention is paid to traffic deaths. I'm in my early fifties, and since I was young things have changed significantly: we have graduated licensing of drivers now, cars are sturdier and have systems like air bags to protect passengers in the event of a crash, and we no longer wink at drinking and driving. And as a result it is safer to travel by car than it used to be a generation or two ago.
You're discussing reducing death *rates* by one half while barely decreasing the actual number of deaths... this is wonderful mathematical thinking that doesn't acknowledge that *almost none of these deaths would happen with changes to our transportation system.*
I've been in a couple of car accidents (thankfully none that have killed anyone, but there have been some serious injuries), and I used to pick up dead bodies from car crashes as a job, so I get what you're saying. There absolutely is a huge human cost.
But I think you're failing to appreciate just how important cars are. You can increase the use of public transport on the margin, but going down to "almost none" is not realistic.
I live in hope someone actually manages to figure out safe self driving vehicles, but until then, buckle up and be careful.
Again, you can have cars and *extremely low* traffic death rates, just look at Sweden, Norway, or even Ireland. Hardly anyone would argue that cars aren't a major part of their society. They have different infrastructure, and driving is a privilege, not a requirement.
If you have a proposal for how our transportation system should be changed, feel free to present it. I expect quite a few of us will be ready to critique it.
I think automobiles are an effective tool. When people are using an automobile to go to the grocery store, we've gotten to the point that we are needlessly putting people in harms way for no reason other than a slight convenience.
Groningen, Netherlands is a model city for transit reform. We needn't completely abandon the car to make it extremely safe (lord knows our highways are *extremely safe*); we've just broken our towns and cities and made them extremely deadly by prioritizing the automobile over safer modes.
When the number of people dying is both large and young, the number of life-hours lost significantly outweighs that of other high-mortality events.
Things have improved in some ways, but this is partly counteracted by cars getting bigger - this is slightly safer for the people inside the car but way more dangerous for people hit by it.
(And while the US has improved on this, the improvements stalled out 20-30 years ago, which is why it used to do better than Europe but now does much worse).
About twice as likely to kill a pedestrian, conditional on a crash happening. Crashes are also more likely to happen, since SUVs give drivers worse visibility of pedestrians.
Which is, itself, a side effect of the federal government requiring fuel efficiency improvements that were impossible to meet - at least at a price consumers could pay. Therefore manufacturers made the midsized cars larger which meant they had a more relaxed mpg standard. This was also driving part of the used car price hikes - midsized cars were much rarer.
The problem here isn't fuel efficiency improvements, it's that they made light trucks exempt from them. The tax and fuel efficiency code should definitely be revised to make sedans more economic than trucks or SUVs.
Yep, same age group and interest here ;) Still, Germany got down from 20k human roadkills a year in 1970 to barely 3k now. So, the US is kinda lagging behind (drivers too young?). - UK safest. After Sweden, but Sweden is too exotic, right? - Public attention in Germany was: One short text in the paper once a year in 1980. In 2022: "tragic" crashes nearly daily on TV (lumped with news about celebrities).
Every state is different, granted. But halving the amount of fatalities is still less good than halving them thrice over in the same period of time. We did not do really revolutionary steps in Germany to reduce traffic deaths - still no speed limit on many miles of our autobahnen ;)
Isn't that slightly incongruous with the energiewende? It seems strikingly odd that lots of places in the enormous and energy-profligate US have 55mph speed limits. But in environmentally conscious Germany you can take your 5 litre Merc and blast up and down the autobahn at 150mph all day long. Am I missing something?
150mph? Well, at 250 km/h the Mercedes will usually stop accelerating (in-built feature) - but one can go 260(!) mph, too: https://www.youtube.com/watch?v=7pg1hhW5qhM the police checked the video and agreed: legit*. - Freie Fahrt für freie Bürger! (free citizens need FREE-ways).
In real life, A) a speed limit of 130 km/h would not change much, maybe 2 percent less petrol in all.
(A speed limit of 55 mph/100 km/h? That proposal will kill your political career.) - But B) "stretching" the exit from nuclear power also just added a few percent of power. - The GREENS want A but not B, the "FDP (free liberals)" want B but not A. The lobby claims: consumers worldwide buy BMW/Porsche/Audi/MB because of the dream of racing down our German highways in them. What's your take on that??
* "For those that say it was irresponsible and dangerous:
4:50am on Sunday / 10 cars per 10km so 1 car per 1 km
Good visibility is about 3-4km straight ahead so there is enough time to react.
The Chiron can brake from 400 to 0 in 9 sec. within 490m. All cars are in the far right lane.
There was an earlier drive through the section to make sure there is nothing on the road.
There is a fence along the whole stretch of the highway, so no animals can interfere.
3 people were spotting on 3 bridges for maximum safety.
If you still think it’s irresponsible and dangerous, well, we respect your personal opinion."
There is a very strong lobby of those who love to be able to drive as fast as they want. In the current government, they are represented by the FDP - the liberal democrats. Actually, the proponents of having no speed limit mostly associate this in public discourse with 'freedom'. Interestingly, compared to the US, for example, in Germany it's no problem for our 'freedom' to get fined for specific (really radical) things you say in public, or of course to *NOT* be allowed to carry a weapon ... but having a speed limit would *really, really* limit our freedom.
That's actually not a majority position any more, and there has been some public pressure to implement at least a temporary speed limit, while we have this energy crisis currently going on. But, as in first sentence.
Interestingly, many of our highways are so crowded, or so full of construction sites, that you actually can't drive that fast anyway. But having the theoretical option is all that counts!
PS: There is also the lobby of those who build very fast cars, of course.
I think that's overwhelmed by the much stronger effects of, "hey, the US is big, you're going to need to cover more ground," and "the US is wealthy, more people are going to buy cars."
Also, the intervention of, "we should just completely rewrite all physical infrastructure so that American cities and towns are small and dense" is a fun counterfactual but is not really a possible intervention.
Fortunately there's a huge range of easy improvements (raised crosswalks, speed bumps, red lights cameras, protected bike lanes in the many cities that do have reasonable density, TOD and upzoning near transit, adopting international best practices for transit, taxing oversized vehicles, etc) that don't require anything extreme like that, so this is pretty much irrelevant until we do all of those.
Is there any low hanging fruit for minimizing this risk personally? Working from home seems like it would have the biggest impact (my commute is easily >90% of my total driving miles) - anything else come to mind?
Slow down at intersections. Even if you are a perfect driver (and who is?), someone else can just blow through a red light without realising. Give yourself a chance to react if that happens.
Vary your commute. Unless you are drinking or texting, your next most dangerous behavior is probably zoning out because you've done this trip umpty times before and are doing the same thing that has always worked. Only *this* time, for the first time, someone is actually pulling out of that alley overgrown with weeds from which you've never ever seen a car emerge, and which you have long since unconsciously classified as "not a road."
There's some very interesting work out of Europe that shows that *removing* road signs improves safety, which suggests that a dulled attention due to assuming you already have all the data is a significant factor in accidents.
Bear in mind that an increasing number of the deaths can be attributed to 'people not doing what they should' which has implications for the results of creating even more rules.
This is why road design (and potentially automated enforcement, and maybe in the future things like speed governors) are better than just making better rules.
I'm sure the "crime should be legal" style left (which is unfortunately large in the coastal cities where this is most needed) would object on these grounds, but automated enforcement is generally much less racially biased (it literally doesn't see race), so I think the main actual objection would be from drivers who like speeding.
You are assuming that that bias is in the cops' actions and not in differences in driver behavior. Also related: if a car is driven by someone other than the owner, it is not right to ticket the owner, which is what happens in automated systems. Finally, none of this addresses the issue that a ticketed person must still pay the fine when it arrives in the mail.
A) I'm fine with people getting more fines if they really do commit more crimes
B) it's completely reasonable to just say the car's owner is still responsible. If you let your friend drive your car and he got a ticket, it's on you to make him pay you back.
C) no, this is one reason we still need cops as well as automated enforcement.
The fact that it is "less racist" isn't going to protect you from heavy attack for racism when the results turn up that speeding tickets are 50% black and 50% white in a 15% black area (or whatever the numbers would be).
Also, I don't think it is clear it is less racist. Some of these automated systems have come under attack for racism specifically because some current enforcement methods are anti-racist through design or happenstance (less policing in black areas, or officers actively trying to not have all their tickets be to one race).
Yeah, the racial component brought me around on automated systems. Also, it's kind of a waste to have expensively trained police doing traffic enforcement when they could be doing more effective policing.
I think automated enforcement makes sense, but drivers tend to adapt to it, so you still need some cops doing traffic enforcement, at least for really bad behavior. But yeah, having 95% of the speeding, running a red light, passing a stopped school bus, etc., tickets issued by machine instead of by a cop seems valuable--let them worry about the other 5% where someone's driving drunk or drag racing or something.
Yep. I think it's getting a bit more attention the last few years with the new urbanist yimby movement (if still way less than it should), but that could also just be my bubble.
Heavens, what lack of attention? I can think of relatively few things that, for the carnage they cause, have received *more* attention in the last 50 years. We got seat belts, and then air bags, and crash testing and all kinds of improvements to vehicles -- collapsible steering columns, engines that "drop" on impact, roll cages -- we got mandatory seat belt laws, and much stiffer drunk driving laws, more rigorous requirements on teen licensing, and phone use while driving laws, and now insurance companies are even hawking apps that will monitor your driving and reduce your rates if you drive more safely.
Presumably all this attention has something to do with the fact that automobile fatalities per capita have dropped 50% in the last 50 years. We should be so lucky to have something like that happen to e.g. drug overdose deaths, which according to the CDC top 100,000 a year lately, 2-3x more than traffic deaths.
Cars have gotten more safe over the years, but I think illegal drugs have gotten less safe thanks to cheap fentanyl being available and hard to dose properly. (I mean, a pharmacist with a proper setup could dose it properly, but a semiliterate biker mixing up the drugs in a trailer probably can't.)
I mean, unless the buyer is asking for fentanyl, the proper dose is zero fentanyl, which is easy enough. The problem with the semiliterate biker is not his lack of pharmaceutical chops, it's his reluctance to give the customer what they paid for.
I'd add that there's an awkward truth hiding here that is even more interesting:
If we really wanted to reduce total traffic deaths to zero, or at least get closer to that number, it would be relatively simple to pass a law making it illegal for car manufacturers to make cars that go faster than, say, 80 km/h. Exceptions could be made for ambulances and law enforcement, maybe even for trucks and other pieces of the logistics chain, in order to keep the economy running.
Just by crippling all modern personal vehicles we'd literally be saving millions of lives overnight.
The fascinating truth is that society seems to be willing to sacrifice a pretty large number of people for the sake of convenience. I'm not even sure it's the wrong choice. Getting places quickly by car is a huge part of modern living. But it's not a choice anyone seems conscious of making.
Have you ever read The Left Hand of Darkness? On the planet where the book takes place, cars go as I recall only about half as fast as on Earth, not for any particular physics or engineering reason, but just because the inhabitants, unlike us, simply assumed that safety was more important than convenience.
Interesting that someone was thinking the same way as you over fifty years ago. Perhaps since then we've all just gotten so used to fast cars that it's become harder for us to notice the cost.
>we'd literally be saving millions of lives overnight
Extending lives, zero of those people would live forever. Sounds dumb, but actually an important consideration in broad scale public policy discussions.
OK I'd love to understand the policy implications of considering it "saving" vs "extending".
One example I can think of is COVID, since the vast majority of mortalities were very old people. So even if a lockdown saved X people, that's a lot fewer extra hours of life saved than stopping X fatal vehicle crashes, since to the best of my understanding many more traffic victims are very young.
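That difference can be put into rough numbers with a years-of-life-lost comparison. This is a sketch using made-up average ages and a flat 80-year life expectancy, not real epidemiological data:

```python
# Rough years-of-life-lost comparison. The ages and life expectancy
# are assumed round numbers for illustration, not real data.

LIFE_EXPECTANCY = 80

def years_of_life_lost(deaths, avg_age_at_death):
    """Total years lost, treating everyone as dying at the average age."""
    return deaths * max(LIFE_EXPECTANCY - avg_age_at_death, 0)

# The same 1,000 deaths weigh very differently by this measure:
elderly_yll = years_of_life_lost(1_000, avg_age_at_death=78)   # 2,000 years
traffic_yll = years_of_life_lost(1_000, avg_age_at_death=35)   # 45,000 years
```

On these assumed numbers, the younger cohort loses over twenty times as many life-years per death, which is the intuition behind weighting traffic deaths more heavily than deaths among the very old.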
I mean, sure, you can adopt a definition of "saving lives" that makes it so that no (human) life has been saved, ever, and that it's decently probable that none ever will, or you can actually use words in accordance with common usage.
Well, the difference between "saved lives" and the reality of what is happening is actually pretty important in public policy discussions, so it is worth being pedantic about it, and fuck "common usage".
Things like heat waves and hurricanes and such often might "kill" 60 people, but almost all those people were on life support or had very serious medical issues and were going to die soon anyway.
Plus "saving lives" has an emotional appeal that sort of wrecks the reality of the actual calculus of the situation. Which is you are just pushing back the date of death. We are all on the same conveyor belt to nowhere.
In Sweden there is a "Vision Zero" (https://en.wikipedia.org/wiki/Vision_Zero) aiming for zero traffic deaths. Of course it is not working in a literal sense. But last year Sweden had 192 traffic deaths which, considering Sweden's population is 1/30th of the US's, would translate into 6k American traffic deaths. Point is, a lot can be done if there is will. And in some places there is slightly more will than in other places.
Agree. About a million people a year killed on the roads.
For me there's a big contrast to those who want to doom-monger about climate disaster deaths, which run at around 20k per year (6k last year). Also, for as long as records have been kept, the death rate on the roads has been going up, and climate deaths have been going down! And yet the freaking out is invariably about the effects of floods and storms, not the 100 times worse killer - motor vehicles.
The *rate* has been going down for 50 years. Do you mean the absolute numbers? But even those have been declining in the United States for the past 25 years or so:
Yes it is. One wonders if it's an accident or if there is some reason. Maybe people have a threshold for "this is a problem," and as long as the deaths per capita are below that level not much happens. (The population has been growing only very slowly over that time.) I think something like that is true of influenza. It actually kills a fair number of people, but sort of in bursts, and people get worked up over it in a bad year when it snuffs 60,000 grannies, but then slowly forget about it over the next 5-6 years as it fades below the "threshold" -- or so I surmise.
Yeah, something like that may be it. 5-10 years ago I was hearing a lot of lamenting of deaths (in young people) due to painkiller overdoses (OxyContin et al.). And we sorta dealt with that. There also must be a size effect: if ten or one hundred people die at once, it's somehow a bigger deal than one person dying in one hundred different instances.
If it's any consolation, sometimes you just get a combination of factors that make it impossible to avoid a disaster. Even good internal controls in a company can be subverted at the top, and there were too many factors converging in this case to create a situation where SBF had no real accountability or guard rails on misconduct except insolvency and reputation loss.
Investors in 2019 had massive FOMO about missing out on the Next Big Publicly Traded Tech Company, and there were plenty of dubious firms getting big money with no accountability because if you insisted on accountability . . . well, there was another investor willing to step up, and what if HE got your big Facebook 2.0 stock payday?
Best thing you can do is try and guess whether it was really one of those circumstances, and diversify your risks and hopes.
I am terribly sorry Scott. But, I have stayed as far away from crypto as I possibly could and have told everyone I know about the problems I see.
I spent a lot of time working in the financial business and have been deeply involved in regulatory and other back-room and plumbing issues. I knew, and tried to warn people, that crypto does not have the systems, or the people to run them, that conventional financial institutions do. And those institutions are not spotless. But there are so many overlapping regulatory authorities (FRB, FDIC, FINRA, PCAOB, NYSE, FASB, ...) and pots of money that most people can spend their lives being FDH (fat, dumb, & happy) about their banks and brokerages.
This is not true with crypto. In 2008 several of the USA's largest financial institutions failed and a bunch of others were very close to going under. But, very few people actually lost money.
Crypto has none of those systems or back-ups.
You are not smarter about money than Warren Buffett, Charlie Munger, or Jamie Dimon. I know I am not. They all said crypto is not good. Believe them.
I think the people who are insisting that utilitarianism/consequentialism doesn't really tell you to violate commonsense moral constraints against lying/cheating/stealing when the upside is high, and that someone who engages in fraud to get billions of dollars to spend on effective charities is misapplying the view even if they were weighing the risks as well as they could, are not really owning the implications of their theory.
Yes, you should have some mistrust of your own ability to measure consequences, and that might give you utilitarian reasons to cultivate a tendency to e.g. keep promises even when your best guess is the alternative is a little better. And maybe that means "in most normal situations (or 'the vast majority of cases') following the rules is the way to go." But this kind of consequentialist justification for a buffer zone only goes so far, and when billions of dollars of funding for the most effective charities (and therefore millions of lives) are at stake, we are outside the buffer zone where the very general version of this point applies. The plausibility of this kind of deference to commonsense norms in cases like "should I cheat on my significant other if I calculate the EV is higher?" dissipates when the stakes get higher and higher.
I know why they wouldn't want to say it out loud, but I think what the utilitarian should think is "if someone really has good reason to think they can save millions of lives by defrauding (mostly relatively well-off) people and can get away with it, they absolutely should. If SBF was reasoning this way, then he made a mistake, not because he didn't respect simple moral prohibitions, but because he overestimated how long he could get away with it and underestimated the social cost to the movement he publicly associated with." True utilitarians really are untrustworthy when lots of utility is on the line! And they should own that consequence!
And rule utilitarianism is not a card that just any utilitarian can pull out in response to these cases - rule utilitarianism is a fundamentally different moral view - a much less popular view than act utilitarianism, with its own set of (quite serious!) problems. Most consequentialists in the EA movement are not rule consequentialists, and they can't just whip its reasoning out at their convenience - they would have to give up their moral view.
> But this kind of consequentialist justification for a buffer zone only goes so far, and when billions of dollars of funding for the most effective charities (and therefore millions of lives) are at stake, we are outside the buffer zone where the very general version of this point applies. The plausibility of this kind of deference to commonsense norms in cases like "should I cheat on my significant other if I calculate the EV is higher?" dissipates when the stakes get higher and higher.
I don't buy it. If you're at the point where you have billions of dollars to donate to charities aligned with your values, and then consider whether to commit a crime, you stand to gain billions to donate if your crime works out, and to lose billions to donate (plus immense reputational harm) if it doesn't. Plus as Scott points out, the utility of donating money is usually sub-linear (i.e. the first dollar goes much further than the 100 billionth), so even if you had a perfectly legal method with a 50% chance to double your money and a 50% chance to lose everything, that would already be negative EV.
So I just don't see how increasing the stakes is supposed to make integrity *less* important rather than *more*.
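The sub-linearity point can be sketched numerically. Assuming any concave (sub-linear) utility of money to donate (sqrt here, chosen just to avoid log(0) on the bust branch), a perfectly legal 50/50 double-or-bust gamble on the whole pot already has lower expected utility than standing pat:

```python
import math

def expected_utility(wealth, utility):
    # 50% chance to double the pot, 50% chance to lose everything
    return 0.5 * utility(2 * wealth) + 0.5 * utility(0)

wealth = 1e9        # a hypothetical $1B earmarked for charity
u = math.sqrt       # any concave utility gives the same qualitative answer

print(expected_utility(wealth, u))  # ~22,361 "utilons" if you take the bet
print(u(wealth))                    # ~31,623 if you decline: declining wins
```

The gamble only breaks even under strictly linear utility; any diminishing returns at all make it negative EV in utility terms.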
Also, this stuff is not new; Scott linked to a Yudkowsky essay from 2008 (!) on the topic. That said, while LW definitely stressed this point a lot, EA may have done less of it? From a recent Yudkowsky post on the EA forum:
> I worry that in the Peter Singer brand of altruism - that met with the Lesswrong Sequences which had a lot more Nozick libertarianism in it, and gave birth to effective altruism somewhere in the middle - there is too much Good and not enough Law; that it falls out of second-order rule utilitarianism and into first-order subjective utilitarianism; and rule utilitarianism is what you ought to practice unless you are a god. I worry there's a common mental motion (in some-not-all people) between hearing that they ought to ruin their suit to save a drowning child, instead of taking an extra 15 seconds to pull off the suit first; and people thinking that maybe they ought to go rob a bank, to save some children. But that part, about the psychology of some people who are not me, I'm a lot less sure of.
---
Finally, it's much harder for the EV calculations to come out positive if you actually care about other people or x-risk. In that regard, purely selfish people (e.g. sociopaths) have it much easier to do ends-justify-the-means calculations with positive EV, because they can more easily discount the harm they cause.
And how many examples do we have of not-purely-selfish people successfully making accurate ends-justify-the-means calculations, anyway?
Again, if the claim is that utilitarianism implies SBF shouldn't have committed crimes because risk of discovery, reputational damage, etc. makes reasonable calculation of expected utility negative, that's fine - I agree. That's not the claim I'm challenging - it's literally what I say the utilitarian should think. The claim that I'm challenging is that utilitarianism implies you ought to follow the rules even when your best calculation is that the expected value of breaking them is much better (by say, millions of lives). The utilitarian reason for SBF not to commit fraud, in other words, depends heavily on precisely the details of his case that bear on calculated EV, and not that good utilitarians should override large differences in calculated EV with simple deontological rules.
A significant problem is that the rule of "follow the rules" depends on assessing that the rules are just and moral. Frequently there's a lot of evidence that they aren't, but it's often still a good idea to follow them. But how do you decide?
A significant chunk of the LW sequences is about decision theory and stuff like the Prisoner's Dilemma and Newcomb's Problem, all of which require a certain kind of internal lawfulness and integrity, with real-life analogies like being a predictably good citizen / trading partner, etc. Same with being truthseeking, not succumbing to reasoning errors, etc.
Everything I read leads me to believe people only take Twitter at face value.
Am I the only person who thinks Elon Musk paid $44B for a real-time prediction market? Or perhaps it's a tool to read public attitudes, or even, a la Cambridge Analytica, an opinion-steering tool.
These tools are available if one only applies a little big data thinking to public perception models.
So far attempts to use Twitter to predict things, like moves in the financial markets, have not worked out great for the people working on them. By the time Twitter knows what's happening, things are already played out and you can see it in conventional indicators.
On the issue of FTX: Why does anyone think that crypto has enough "real" value/potential/attractiveness to justify multi-billion dollar valuations?
The first explanation I heard for crypto was that it allowed secure, costless, untraceable transactions. But, "secure" is pretty well provided by credit cards. "Costless" is a nice concept, but credit cards and banks are pretty low cost, and who pays for all the server farms generating the blockchain if nobody ever pays anything? "Untraceable" seems mainly useful for criminals, and we've seen that, if law enforcement gets serious, transactions are actually very traceable, which ought to be obvious if the whole history of every transaction is in the blockchain.
So, it seems to me that crypto is, and always has been, a fraud. Maybe not in the legal sense, where people are knowingly selling empty sacks to gullible marks. But fraud in the practical sense that there isn't, and never will be, anything behind the smoke and mirrors.
Sorry if everyone else already knows the answers, but I never seen an attempt to address these in a serious way.
It's digital cash. There are rules beyond just the law that bind how credit card/bank transfers are allowed to take place. These are often onerous, and people tend to pay each other in cash when they want to circumvent them. Cash has physical limitations, so preferably there would be a digital version of it that can still circumvent rules like "you can't withdraw more than $x without asking for permission", "you can't use this money to pay for pornography", and "you have to wait 2-3 days for this money to actually transfer". The high valuations are generally a function of the idea that if every person in the world used digital cash, then the digital cash industry would necessarily be a very significant portion of the global money supply.
I've heard this concept. Digital cash is a long, long way from performing this function. For one thing, it's not generally accepted, so its usefulness is very limited. For another, its value fluctuates wildly - you might be paid $100 in digital coins today and find it is worth $50 tomorrow. This might be fun if you're playing with things that don't matter much, but it would be very hard to run a business or household this way. For a third, its security seems uncertain - there have been a lot of implosions of crypto markets and exchanges, and some cryptocurrencies never take off at all. Even if BTC itself doesn't implode, it will be hard for anyone other than aficionados to know how to keep it secure. There are risks with conventional currency as well, but people at least understand those risks.
You asked why anyone believes crypto has enough real value to justify multi-billion dollar valuations. This is the fundamental answer to that question. You can disagree about the feasibility, but I think it's unfair to paint the entirety of its history as a fraud.
Well, I think it's fair to paint the entirety of its history as having nothing behind the smoke and mirrors. Not that I expect to convince anyone. My financial advisor is recommending some small level of exposure to crypto. I have refused to have anything to do with it. I'm quite confident that I'm right to do so.
I think part of the original goal was to get rid of inflation and force an end to central banking. This is why there is a fixed supply of Bitcoin. It solves the issue of people using the Fed to enrich themselves by foisting unpayable debts onto future generations.
I think crypto may eventually find use as an asset less susceptible to inflation, if the entry/exit points become well regulated enough to prevent leverage-driven volatility.
If it’s very useful for criminals and not especially useful for non-criminals, then what incentive is there for non-criminals to use it?
If your goal is to create a widely used currency, then you’re going to need it to be used by more than criminals. At some point all the non-criminals are going to realize they aren’t getting any value out of it, and that’s when the house of cards will come crashing down.
This. There are, to be sure, plenty of "criminals" that we may find sympathetic and want to facilitate their crimes. The Hong Kong resident who wants to take his wealth with him as he flees to a less oppressive city, is a criminal by Chinese law. I'm glad that, for a while, cryptocurrencies could help a whole lot of such sympathetic criminals move money around unseen by very unsympathetic authorities.
But if that's *all* your currency is good for, then it's not going to last once the associated criminality rises above the tolerable-nuisance level in the authorities' minds. And I've yet to see the cryptocurrency that any law-abiding normie would prefer to dollars or euros as a medium of exchange.
The authorities can only affect fiat on-ramp and off-ramp, which isn't a big deal if you don't want to speculate on normies FOMOing in. XMR is already delisted from most exchanges because it is too good at anonymity, and yet it's trivial to get it.
Trade it with some reputable counterparty on localbitcoins, for example, possibly exchanging XMR to BTC/ETH in any of the available non-KYC ways (of which there are many). Or perform the transaction in a country that has legal businesses doing such transactions.
Crypto is not for directly paying at a gas station, it doesn't scale well for that purpose. It's a way to do anonymous business regardless of your location, and a store of wealth that cannot be seized if you use it correctly. Both of these are immensely valuable.
And, the government has demonstrated that it can break the "anonymity" and use the blockchain evidence to identify and prosecute the criminals. It could also lean on banks and other regulated financial institution to make it much harder to convert between crypto and "real" money. Which implies: if crypto becomes really useful for criminals, the governments of the world have the incentive and the means to shut it down so that it is much less useful.
>On the issue of FTX: Why does anyone think that crypto has enough "real" value/potential/attractiveness to justify multi-billion dollar valuations?
Overall crypto skeptic here, but I'll try to say some good things about crypto.
"Untraceable" is mainly useful for criminals and is a net negative... in the world now. But in a worse world 'untraceable money for criminals' is probably a good thing. Whatever your political affiliation, imagine your nightmare scenario political leader taking over and criminalizing your lifestyle or political party or whatnot. If that happens I will be glad that untraceable crime money is a thing. For this reason alone I want some kind of crypto to exist into the future. Though it absolutely doesn't need to be trillions of dollars of market share - in fact crypto being mainstream likely makes it less useful for this purpose because it will be so high profile.
The stuff with smart contracts and other even weirder recent tech is, at the very least, weird enough that I'm open to the possibility of something good being there. I think it's mostly silly (or even mostly scams) but I can't rule out genuine financial innovation could happen (or be happening) that isn't possible in normal finance.
Lastly, Vitalik Buterin seems to be the real deal. So many crypto people sound like marketing bullet points or their entire conversation is about prices going up, Vitalik genuinely seems to be in crypto for the tech and ideas.
The issue with smart contracts is that they create more issues than they solve.
"Oh I don't want to pay pricey lawyers and have them meddling in my transactions with others, instead I will use this protocol where all the rules of the transaction are written into the transaction itself!".
"Umm who is going to write those rules for you?"
"people who are experts in rules and trasnaction pitfalls of course."
"So lawyers?"
"umm yes I guess lawyers"
"And um who is going to make sure those rules the lawyers wrote are actually coded correctly into this transactions and will be executed in the way you intended under the circumstances you agreed to?"
"Well I guess we will need some crypto/coding experts too"
"Ok so in an effort to escape from needing lawyers mediating your contracts you have now crated a situation where you need both lawyers and coders..."
One driver for wanting some outside-the-mainstream payment mechanisms is the use of the banking system as a mechanism for censorship, as with the Canadian protests, or earlier, the US credit card companies not allowing donations to Wikileaks.
Well, to escape government monitoring and interference, you're pretty much always better going low-tech than hi-tech. The telephone was a boon for government spying, and social media (as a way of communication) even better. It's really, really, hard to monitor what people say if they tend to say it to each other's face, or pass samizdat around. It's much easier if they transmit it over big static physical infrastructure (governments are very good at dominating fixed physical assets) operated by big companies that are quite dependent on a favorable regulatory environment.
Plus if you're in an environment where you don't trust the government, why are you trying to send money to or from recipients that you don't trust? Maybe they work for the government and you're being set up! I would think in an oppressive situation what you want is a way to transfer money to and from people *that you already trust*.
We already have tons of examples of what happens if all transactions are traceable. For instance, the worldwide sanctions on Russia after it declared war, wide-scale coordination of financial services against sex work, or the situation where people donated to the trucker protest in Canada and then others wanted to trace the donations and cancel them.
Even if one agreed with all these prohibitions so far, that still leaves the problem that the values of governments can change over time, and then it would be very convenient for there to be a method of making transactions that cannot be prevented.
Theoretically, as I understand it, crypto comes into its own in the hypothetical future that central bank shenanigans have destroyed all credibility among fiat currencies, such that Joe Average is just not willing to be paid in dollars or yen or euros anymore, but insists on BTC (or whatever), and all the stores take it because that's all anyone offers, and people live happily ever after because it is no longer possible for governments to manipulate the currency for political ends, or impose currency controls, so stuff like inflation/deflation or the decoupling of interest from risk just doesn't happen any more.
The rational reason to invest in it early on was pretty much a land-rush speculation kind of choice, where you said "ah, someday this worthless swamp with nothing but mosquitoes and cholera in it will be New York City and cost $100 per square inch, so if I buy 100 acres at $1/acre right now I will be exceedingly rich in that future."
This has always been the main argument against crypto (which I've made consistently). The counterargument is something like "the existing financial system is kind of bad in various ways so crypto could potentially replace it, in which case it would justify it" (SBF's fundraising presentation included a bit about "I want our customers to be able to use it to buy a banana anywhere in the world". Transferring money like that is a legitimately nontrivial problem right now.)
Sorry to everyone if this was already pointed out, but... isn't this St. Petersburg shit just martingaling? Its flaws as a strategy have been well known since literally the 1700s. If this is the quality of thinking in EA circles, it's shocking that they were even trusted with one dollar of fake money. These people need traditionalism like pagans need Jesus.
No, St. Petersburg paradox is not martingaling (contra the Wikipedia article on martingaling, which asserts that it is, citing an abysmal source). I'll give you my understanding of the difference, and someone with better math than I have can tell me where I'm wrong.
Martingaling means doubling the bet after each loss and resetting to the original stake after a win. In a fair game it has zero expected value for any finite number of bets, and the "guaranteed" profit at the limit only materializes if your bankroll is infinite.
The St. Petersburg-style bet involves a series of better-than-fair double-or-nothing gambles - each round you double with better-than-even probability, or go to zero permanently at the first loss. The expected value is positive for any finite number of bets, and goes to infinity in the limit. There's no rational point to stop playing according to naive EV calculations, and yet you're guaranteed to go bust eventually - hence the paradox.
The two have totally different mathematical properties. The martingale makes no mathematical sense to even start, while with the St. Petersburg paradox, it makes no mathematical sense to ever stop. This remains an unsolved problem.
SBF seems to have had the belief that in charity, dollars are equivalent to utilons, at least at the sums he was working with. Going from $14 billion to $28 billion allows you to generate twice as many altruistic utilons, in this thesis. Of course, crashing to $0 doesn't just bankrupt you. It does what it's doing right now, which is creating all kinds of disorder for the EA movement. And it seems silly to me to claim that money doesn't have diminishing returns in real life.
Furthermore, SBF wasn't being offered any sort of St. Petersburg paradox deal. He was making money through some combination of normal market activity and fraud. The assumptions underpinning the St. Petersburg paradox do not apply in real-world finance. Nobody's offering an infinitely repeated 50% chance of doubled-payoff-or-bust.
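A quick numeric sketch of that repeated double-or-nothing structure (the 51% edge per round is an assumed number for illustration, roughly the odds SBF floated in interviews): expected value grows every round, while the probability of not having gone bust collapses toward zero.

```python
# Repeated double-or-nothing at p = 0.51 per round (assumed edge).
# Bust is permanent: a single loss zeroes the bankroll forever.
p = 0.51
for n in (1, 10, 100):
    ev_multiple = (2 * p) ** n   # expected bankroll multiple after n rounds
    p_alive = p ** n             # probability of never having lost
    print(f"n={n}: EV x{ev_multiple:.2f}, survival {p_alive:.2e}")
```

After 100 rounds the mean bankroll is up about 7x, but it is carried entirely by a ~1-in-10^29 sliver of worlds; in essentially every actual world the bankroll is zero, which is why anyone with concave utility refuses the sequence.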
The solution to the St. Petersburg paradox is well known: It only makes mathematical sense to never stop, if the bank is infinite. And pretty big is not infinite.
You could try linking to a credible source supporting your statement. Your proposed solution is not in the Wikipedia article, and it's also not on the Stanford Encyclopedia of Philosophy's page on the subject. If I can't find it in either of those locations, I'd be surprised if it's "well known."
As best I can understand the thinking went something like this:
1) You trust that your trading/arbitrage strategy gives you a positive EV in the long run.
2) You take bigger positions than would a trader who is worried about "gambler's ruin", a run of bad luck that wipes out your bankroll.
3) The reason you don't worry about your bankroll being wiped out (I think) is that you are merely one altruist in the game, so it is not your individual bankroll that matters but the collective bankroll of all altruists everywhere. Your personal loss is irrelevant; altruists win if they continue to make positive EV bets. (This, I think, is why the St. Petersburg Paradox can be ignored.)
4) Another reason you bet so aggressively is that the utility of your winnings grows linearly because you will donate them to charity, as opposed to logarithmically, which would be the normal assumption if you were only using your winnings on yourself.
5) In the interview with Tyler, SBF says something about assuming "independent worlds". Not sure, but I thought maybe he meant that enough bankrolls exist in the multiverse so that even a very low positive EV justifies extremely aggressive bets. (I really might be misunderstanding that one, though. He doesn't use the word "multiverse".)
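Point 4 in the list above is the crux, and it maps onto the Kelly criterion. A minimal sketch under assumed numbers (a repeatable even-money bet with a 51% win probability): logarithmic utility says stake only the small Kelly fraction of your bankroll, while linear utility says stake everything, every time.

```python
import math

p, q, b = 0.51, 0.49, 1.0    # win prob, loss prob, net odds (even money)
kelly = (b * p - q) / b      # Kelly fraction f* = (bp - q)/b, ~0.02 here

def expected_log_growth(f):
    # expected log of the bankroll multiple per round at stake fraction f
    return p * math.log(1 + b * f) + q * math.log(1 - f)

print(kelly)                       # ~0.02: a log-utility bettor stakes 2%
print(expected_log_growth(kelly))  # small positive: sustainable growth
print(expected_log_growth(0.99))   # strongly negative: near-all-in is ruin
# Linear utility instead ranks stakes purely by EV, f * (b*p - q),
# which is maximized at f = 1: bet the whole bankroll on every round.
```

So "dollars are utilons, linearly" really does imply maximum-aggression betting; the disagreement with ordinary risk management is baked into that one assumption.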
1) The thing described by SBF on that podcast is pretty much the opposite of martingaling. Instead of repeatedly doubling down until you win (with epsilon risk of losing everything) you repeatedly double down until you lose (with epsilon chance of winning everything).
"Instead of repeatedly doubling down until you win (with epsilon risk of losing everything) you repeatedly double down until you lose (with epsilon chance of winning everything)."
If God tells you that he's going to kill you (or worse) if you aren't supreme ruler and owner of all the Earth by this time next year, what else are you going to do? A whole lot of double-or-nothing bets at least give you that infinitesimal chance - and it will be exciting and fun while it lasts.
An awful lot of EA, possibly including SBF, seems to believe that the AI Gods will consign them all to bignum simulated Hells unless they manage to, not quite rule the world, but at very least exert a substantial influence over the computing industry in the next few years - and that's probably going to take many billions of dollars. If you don't have that sort of money, what else are you going to do?
You have to be very smart to be this stupid, which is why firms that hire on people familiar with all these paradoxes and thought experiments and maths problems have old guys in charge who can and will put a stop to gambling with the funds.
The FTX/Alameda problem was that there weren't any old guys in charge: the kind who wouldn't know a logic problem if they tripped over it, but do know how markets work in the real world.
I still feel like it's obviously acceptable to engage in unethical activities for the greater good, i.e. defrauding investors in order to send their money to deserving causes. The lesson here is that it doesn't serve the greater good to do it in a really obvious way that gets you instantly caught.
You can never be sure that you've figured out what will happen, so certainty is unjustifiable.
That said, if an activity is unethical, then it *is* unethical.
The catch is that what is "unethical" is highly context dependent. And outside observers may well have a different opinion of it than you do.
Consider: What is the difference between a soldier defending his homeland and a terrorist?
OK, what about the Cossacks raiding Lidice: were they loyal soldiers honorably following orders, or brutal terrorists? On what basis are you deciding?
Yes, that's an intentionally extreme example, but I would predict that were you to have asked one of the Cossack troopers if his actions were moral and ethical he would have said "yes".
If you don't like that, consider George Washington, noted traitor against Britain. What about the famous British General, Benedict Arnold?
The problem isn't that it's sometimes justifiable to engage in unethical behavior, it's that it's often quite difficult to define.
I think you can make entirely plausible arguments for this sort of thing in the abstract, but that in practice the unethical behavior is then applied far more widely/loosely than the original argument implies. As an example, one justification for the US torture program in the war on terror was a utilitarian/consequentialist one involving a nuclear terrorist. And yet, the way torture was apparently actually used was very different, and much wider, than was imagined in that original justification. We were not torturing terrorist masterminds who'd planted nukes in Manhattan, we were mostly torturing nobodies who might have some information we wanted, or torturing terrorists already in custody for months or years to get evidence in other cases.
I think this is the usual situation. There are indeed weird edge cases where it would be morally acceptable for me to cheat on my wife (involving her being braindead, or time travel, or some other bizarre scenario you can spin up). But if I start working up moral justifications for cheating on my wife, the most likely bet is that I'm going to end up justifying infidelity of a more standard "sleep with your hot coworker" type.
"I still feel like it's obviously acceptable to engage in unethical activities for the greater good"
To which I can only reply with a quote from "The Blue Cross":
"Reason and justice grip the remotest and the loneliest star. Look at those stars. Don't they look as if they were single diamonds and sapphires? Well, you can imagine any mad botany or geology you please. Think of forests of adamant with leaves of brilliants. Think the moon is a blue moon, a single elephantine sapphire. But don't fancy that all that frantic astronomy would make the smallest difference to the reason and justice of conduct. On plains of opal, under cliffs cut out of pearl, you would still find a notice-board, 'Thou shalt not steal.'"
"Do evil that good may come of it" is not permissible, and the ends do not justify the means.
From "The Flying Stars":
"Men may keep a sort of level of good, but no man has ever been able to keep on one level of evil. That road goes down and down. The kind man drinks and turns cruel; the frank man kills and lies about it. Many a man I’ve known started like you to be an honest outlaw, a merry robber of the rich, and ended stamped into slime. Maurice Blum started out as an anarchist of principle, a father of the poor; he ended a greasy spy and talebearer that both sides used and despised. Harry Burke started his free money movement sincerely enough; now he’s sponging on a half-starved sister for endless brandies and sodas. Lord Amber went into wild society in a sort of chivalry; now he’s paying blackmail to the lowest vultures in London. Captain Barillon was the great gentleman apache before your time; he died in a madhouse, screaming with fear of the “narks” and receivers that had betrayed him and hunted him down."
Chesterton was wrong though. God isn't real. All the atheists and materialists he makes fun of in his stories were actually just correct in their philosophy. Also he was fanatically racist even by the standards of his day, which suggests to me that he's morally unreliable.
I think the counterargument presented above is at least partly "if you think you're doing the good version where it won't blow up, assume that you're wrong and you're actually doing the bad version where it will."
Probably true but hard to prove, as there's no way to know how many people commit massive amounts of fraud and don't get caught. Maybe it works 99 times out of 100 but we only hear about the guys who fail!
Should you ever take any kind of risk without assuming you're misevaluating its safety and actually it's going to blow up in your face? If so, what makes "unethical activity for the greater good" type risks deserve to be treated differently?
"Should you ever take any kind of risk without assuming you're misevaluating its safety and actually it's going to blow up in your face?"
I think that is actually a pretty good rule of thumb, along the lines of "measure twice, cut once" and similar advice. No matter how expert you are, there is always the chance something can go wrong, and if the results of "this going wrong" are losses in the billions, then you should be even more cautious.
If Bankman-Fried had taken clients' money and punted it all on the favourite at Haydock in the 3:30, it would be patently visible how stupid this was. Gambling on crypto is still gambling.
I think my claim is that there are certain rules it's inadvisable to break, whether for the greater good or for your own practical benefit.
For example, "never go cave diving unless you are a professional cave diver". We implement these rules not just because they're good ideas, but because a normal person evaluating the situation rationally is very likely to make mistakes. You think "a cave, whatever, how bad could it be, I'll just go in and out", and then apparently according to everyone who discusses this topic you get disoriented and die.
I don't think all risks are like this. I never hear any rules about "don't walk into caves on land". There are just some risks where everyone agrees you're going to make mistakes unless you follow the time-tested advice.
I think this is useful for personal things where you don't want to die (like cave diving), but extra useful for moral risks where you don't have the right to risk other people's safety and so we would want to agree on a compact where no one does this.
Is that the kind of thing you're asking about or am I missing your point?
> I never hear any rules about "don't walk into caves on land".
If you need some nightmare fuel, google the Nutty Putty Cave, then look at YT videos of people going through similar caves. To be fair, that does look like a terrible idea immediately, but it's a lot more terrible once attempted.
I think I'd still just trust expected value assessments in the cave example. But since I don't know anything about cave diving, my assessment would mostly rely on received wisdom.
If I was some cave scientist, and developed a new, well tested, theory on cave diving risk, and the cave was full of gold, I'd totally ignore the rule.
If the cave had "a chance to solve all the worlds problems" at the bottom, I should go in even with near certainty of death.
The stuff SBF was doing looks a lot like the kinds of risks that all the really big financial institutions did originally, to get big in the first place.
I don't know what FTX's end game was, but it could have been something super ambitious, like "replacing all established currencies with SBF-coin and taking over the world", which would be totally possible if you took on enough risk and got lucky enough. Which would easily be a big enough pay-out even at 0.01% chance of success.
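For what it's worth, the naive expected-value arithmetic behind a long-shot bet like this is easy to sketch. The probability, payoff, and loss figures below are made-up illustrations (not estimates of FTX's actual prospects), chosen to show how sensitive the conclusion is to the assumed payoff:

```python
# Toy expected-value comparison for a long-shot bet.
# All numbers are illustrative assumptions, not real estimates.

def expected_value(p_success, payoff, loss):
    """EV of a bet that pays `payoff` with probability p_success,
    and otherwise destroys `loss`."""
    return p_success * payoff - (1 - p_success) * loss

p = 0.0001      # assumed 0.01% chance of success
loss = 1e10     # assumed ~$10B destroyed on failure

ev_big = expected_value(p, 1e15, loss)    # "take over the world" payoff
ev_small = expected_value(p, 1e12, loss)  # merely trillion-dollar payoff

print(ev_big > 0)    # True: EV flips positive if the payoff is astronomical
print(ev_small > 0)  # False: the same bet is EV-negative at a smaller payoff
```

The point is that once the imagined payoff is large enough, almost any risk looks "worth it" on paper, which is exactly the failure mode the rule-following argument above is warning about.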
I think this FTX / SBF affair has a lot of parallels in the 19th century banking business in the USA: give us your gold, we will give you a bank note. All your gold is stored in a nice safe building and you don’t have to worry about stashing it away. You can just carry around this piece of paper and use it to buy things. It’s not a bad idea, but it sure malfunctioned a lot.
"If I was some cave scientist, and developed a new, well tested, theory on cave diving risk, and the cave was full of gold, I'd totally ignore the rule."
And then some poor son of a bitch in the aqua rescue squad would be tasked with getting your bones out of there.
If anyone is still sleeping on Star Wars Andor, please give it a try. Fans of 'Tinker Tailor Soldier Spy' in particular. So much juicy Star Wars bureaucracy and just plain day-to-day life. Action scenes are used sparingly but when they hit, they hit hard. Blasters never felt so deadly.
It's fun in a lot of ways, but as someone who's been watching it when exhausted after work, I'm having a lot of trouble keeping track of what the hell is going on with a lot of the threads.
Sometimes I have trouble watching it after a tough day of work because it's such a bleak and depressing world to mentally exist in when I'm already stressed or sad. Though once I start an episode I'm totally riveted.
And so far they keep pulling back on the darkness to keep me bought in. Often more in the setting than in main characters. Ferrix isn't a den of scum and villainy, it's a community where neighbors check in on the old woman who never turns on her heat. Prisoners try to cover for and protect an old man who can't keep up the work quota. It's weirdly hopeful amidst some grimdark bits in other places.
EA needs to deal with the fact that its sociopathic framework will attract sociopaths.
I call it sociopathic because it frames problems and solutions in a very utilitarian maximization approach without much regard for individual emotions from others. This might be even the best framing for some problems. But, people with sociopathic tendencies will be attracted for such framing. And these sociopaths are the ones most prone to in the end do egotistical stuff that harms many people.
But I would add that I think there is a large amount of individualism in EA, even though most of the things EAs care about are better solved through collective action.
I agree it's more likely to attract sociopaths, but not so much as adherents but rather as predators. People who have strong beliefs in their own powers of reasoning and perception are better than average marks for con artists. That's one reason why the long con almost always begins with flattery. ("Now some people are so stupid they can be taken in by fraud, but you sir I can see are made of better stuff, you would spot a con in an instant. So let me tell you of my golden 100% honest fabulous opportunity, which only people with your giant brain can even comprehend...")
That's called the Kansas City Shuffle. It depends on getting the mark to work out that a con is going on, but keeping the mark ignorant of the real con. While the mark is thinking they're so smart and are going to walk away with all the gains, you are milking them like a cow:
"In order for a confidence game to be a "Kansas City Shuffle", the mark must be aware, or at least suspect that he is involved in a con, but also be wrong about how the con artist is planning to deceive him. The con artist will attempt to misdirect the mark in a way that leaves him with the impression that he has figured out the game and has the knowledge necessary to outsmart the con artist, but by attempting to retaliate, the mark unwittingly performs an action that helps the con artist to further the scheme.
The title refers to a situation where the con man bets the mark money he can't identify what state "Kansas City" is in. The mark, guessing that the conman was hoping to trick him into saying Kansas, identifies Kansas City, Missouri as his answer. The con man then reveals that there is a much less well-known Kansas City, Kansas meaning Kansas was actually the correct answer."
Famously, this is alleged to have been true of Madoff's scam--very sophisticated investors had funds with him, and it seems like they must have known his returns were implausible, but they probably thought he was doing some unethical thing to someone else (frontrunning trades or something) and furnishing them money. A bunch of them ended up losing a huge pile of money.
Are sociopaths actually disproportionately represented in EA? You're stating people need to "deal with the fact" of something not actually established.
Sure, but surely it’s enough for us to make an update on our priors towards “maybe EA should think about how to avoid accidentally giving a lot of money to sociopaths.”
I disagree. Sociopathy, as it’s generally used, is characterized by an incapacity to feel empathy. People who can intentionally do bad things without feeling any remorse or sympathy for their victims are, if not outright sociopathic, at least exhibiting signs of sociopathy.
And I strongly suspect this applies to most people who commit large-scale fraud.
I think people who truly had strong empathy for all humans would be more likely to do this sort of fraud. What protects us is that normal people have more empathy for the people near them than the millions of distant strangers who are deeply suffering.
And even that may not protect us. Robin Hood is usually seen as an empathic character. He personally knows the poor people relying on him better than the rich he robs, so empathy compels him towards crime.
I think this sounds nice, but ignores how actual human psychology works. That is also what I believe Daniel C was alluding to in the top comment.
There are plenty of people who, in one way or another, fight a group on behalf of another, and I don’t think that by itself requires sociopathic tendencies. People in those situations simply dehumanize their opponents in their mind. A Robin Hood type or a freedom fighter would not actually have non-hostile interactions with their opponents.
I don’t think, psychologically, you would be as likely to get the same effect with fraudsters. They have to lie to the face of people who come to them as allies, and they have to do it over and over for a very long period of time. It’s much harder to dehumanize someone who has done you no wrong and wants to be your friend, especially when they trust you deeply enough to give you huge sums of money.
Humans aren’t pure logic machines. If you are a genuinely empathetic human being, performing this fraud would be insanely difficult. No matter how “logical” it was to rip people off, your empathy would stop you. Nor would this empathy be something you could just turn off at will; you would have to be constantly fighting against it. Very very few people would have the conviction to do something like that.
Now, if you didn’t have empathy, that would be a lot easier. That’s the point.
And there’s another piece to this, and it’s a big problem for EA: EA people tend to say things like “people who truly had strong empathy for all humans would be more likely to do this sort of fraud.” If you’re the kind of person who would commit this kind of fraud, then the EA community is a great place to find yourself. There aren’t many other places where you can act blatantly sociopathic and still be lauded as an incredible person for doing so.
"The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”."
Scott, in light of your previous Contra Resident Contrarian post, please forgive me if I find this a little bit ironic. I am sorry you and other people are suffering because of this. I hope any damage can be minimized. But perhaps it might be time to update toward being a little bit more of a skeptic when it comes to unverified claims that other people make?
"Trusting people to accurately report their internal mental state in a medical context is good" =/= "Trusting people to accurately report whatever regardless of incentives is good". I would have hoped that on a 'rationalist' forum this distinction would be obvious.
Keep in mind that cryptocurrencies exist because of people who don't trust the normal fences, safeguards and institutions of our society. You buy bitcoin because you don't trust The Federal Reserve to responsibly fight inflation, or because you want to skirt the law and buy heroin or something else currently illegal.
Trust itself tends to be a two-way street. You should be more wary of people who themselves have low levels of trust in others. Therefore, losing trust in humanity because some crypto-traders turned out to be conmen is like losing trust in humanity because a heroin dealer in a dark alley ripped you off. Maybe put your trust in people living on the more central boulevards of society than in a counter-culture that has recently emerged under its bridges because they have renounced those boulevards.
And this is why I haven't touched crypto with a 10-foot pole. Not surprised by the crash, but sorry for lots of well-meaning people who are now feeling the fallout.
Suppose I was arguing with a Holocaust denier, and I said I believed all the eyewitnesses and expert historians, and explained that it's dumb to doubt a bunch of honest-seeming people, and he said that was stupid.
Then some crypto people who I thought weren't doing frauds turned out to be doing frauds. Does the Holocaust denier have the right to say "Ha! You proved that you're too trusting, so now you have to admit I was right"?
Probably I need to change something in my belief net somewhere, but this shouldn't license people to assume it's whatever helps win their particular argument with me.
I think the analogy would be more accurate if you took the side of the Holocaust denier in the argument, because in the "Contra Resident Contrarian" post, you are mostly siding with people making claims that go against the mainstream opinion and/or the preponderance of evidence. I interpreted your post as an argument that reports of unusual mental states should be believed as a default, even if these reports go against mainstream opinion or the preponderance of evidence. Extending that claim to the Holocaust denier would imply believing that the Holocaust denier is genuine in his belief. Thus, even if you rightly and justly disagreed with his conclusions, it would mean not condemning or judging him for it, and giving him the same consideration as you would give to a spoonie. Please explain to me where I err in reaching this conclusion.
Sorry, that reads a bit like motte and bailey to me. Ucatione is not even implying it is unsound to accept the Holocaust or round-earth as facts for all practical matters. Might even trust Wikipedia (as I do). Still: science and even Wikipedia do err at times, but even they do NOT see it as settled that crypto is safe & great, AI will change everything, and EA is the best way to do most good - or that most could get really, really high on meditation if they just tried a bit more. Not all Thais are monks. - Street-wisdom solved: whoever asks for bucks "to buy a ticket" is a beggar (I would not say fraud, cuz we all know, if we cared to know) - so is the young mom with baby in Calcutta who asks you to buy her a can of baby formula at that kiosk. (She will sell it back when you leave.) - It is possible even some higher-ups at FTX were not aware of anything wrong. More likely they thought those leverage deals (or whatever was going on) were super-smart and legit. Maybe SBF did, too. It worked out so well, didn't it? And as ever: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” - Upton Sinclair. - You may find fine people at the FDA even, "trust" them alright, just do not place bets on that. Or do, but yeah: check the odds. ;)
How will you FURTHER update if the stories that Ukraine funding Congress passed was laundered back through FTX to Democratic campaigns turn out to be correct?
What is the proof? I did due diligence research on Twitter, and there is no real proof of anything, just myriads of accounts posting how it's "Confirmed". The only facts seem to be that FTX converted crypto to fiat for Ukraine and FTX donated to Dems (a measly $40m), and those facts do not imply anything. Other conjectures are given without proof. Based on how heavily it is being promoted on Twitter with the gutted misinformation team and Putin-friendly owner, it looks like just another "bioweapon pigeons" psy-op du jour.
I think we should hold off on the conspiracy theories. Right now there are a lot of rumours swirling around and some lurid readings of the situation. Jumping in with political bribery rumours isn't helpful.
I haven't been following this. How would you tell the difference between this version, vs. "Congress funded Ukraine because they support Ukraine, Ukraine invested in FTX because it was a fast-growing company, SBF/FTX donated to the Democrats because he liked Democrats"?
The claim is that the money Ukraine invested in FTX came out of the money Congress gave to Ukraine, rather than that money being spent on aid and weapons. Of course, dollars are fungible, and it’s possible Ukraine just decided to park the Congressional cash in FTX rather than a regular bank; but given previous “10% for the big guy” Ukrainian transactions, it’s worth looking into.
Well, first off, most of the US aid to Ukraine wasn't "here's some money, go buy weapons", it was "here's some weapons, don't waste your time in the market, just get back to killing Russians". Those weapons are kept on the US books as assets with a substantial dollar value, so you'll see news stories about how congress voted to give Ukraine another X billion dollars in aid, but that's not billions of actual dollars.
Second, how much *money* did Ukraine actually invest in FTX? I believe that at least some of the aid Ukraine has received (from anonymous foreign sources, not the US congress) came in various cryptocurrencies, but the people who sell things Ukraine needs don't take bitcoin so they'd want to use some sort of crypto exchange to turn that into hard currency ASAP. But did they actually give serious amounts of real money to FTX?
Third, there doesn't seem to be any evidence for any of this.
Fourth, that third point should have been the zeroth point, obviating the need for any of the rest, but here we are.
Hold on a moment. I think we need a source for anything having happened at all before asking for a source for the reason. I would find it highly surprising if the Ukrainian government were putting any cash in FTX for any reason.
This question is posed by the person above you as a shadowy conspiracy but isn't it what happens in politics all the time?
I don't know if the allegation is true, but if it is, it would hardly be the first time that someone who gets money from the government via some policy (directly, in the sense of "government spends money on thing made by company X" or indirectly, like "government relaxes (or imposes) regulations that cause company X to save money at the expense of the public"), gives money to the politicians who supported the policy.
This seems like a worse version because it only makes sense if Ukraine was knowingly investing in a ponzi scheme.
>5: In light of recent events, some people have asked if effective altruism approves of doing unethical things to make money, as long as the money goes to good causes. I think the movement has been pretty unanimous in saying no.
I haven't had significant interaction with the EA movement, but I *have* read The Most Good You Can Do - AFAIK fairly close to an EA Bible - and I remember thinking that Peter Singer leant quite heavily into this. I mean, sure, he didn't advocate fraud IIRC, but stock-market speculation (which he did advocate) is basically zero-sum and as such it boils down to a coat of rationalisation over "rob people and use the money better than they would"; it's a very short hop from that to the thinner coat of rationalisation that is "defraud people and donate the money to charity".
So, y'know, maybe the movement as it now exists mostly disclaims fraud, but it's not surprising that fraud shows up when it's a fairly-reasonable conclusion to come to from a principle outlined in one of EA's founding texts, and I consequently suspect "unanimous" is an overstatement (like, sure, the people who think fraud in the name of charity is great probably aren't going to *say it in public*, because lol fedposting, but they're around).
(I have actually made the "Singer advocates robbery" argument a couple of years back, but I hadn't thought of the fraud part before so feel free to discount that part as hindsight bias.)
I think there's an important difference between winning zero sum games against people who've consented to play them (like in stock market speculation and other gambling) vs stuff like fraud, robbery, etc.
There is a very big difference in my view between trying to win a locally-zero-sum contest with positive externalities (or even neutral externalities, making it actually-zero-sum) and defrauding people, which is negative-sum. Someone who tries to win a contest for money (or prestige etc.) is not actually a "very short hop" from defrauding people for money, as should be fairly obvious from all the non-fraudulent contest-winners (and even more non-fraudulent contest participants, who aren't selected for having cheated).
Playing zero-sum games is not unethical in the same way, and I might even say not unethical at all.
Everyone's allowed to participate in a race, or a lottery, even though by participating you decrease others' chance of winning directly. It's not robbery because everyone participating chose to do so with the knowledge that the game was zero-sum, and took their chances.
Of course, it's better to do positive-sum things, but winning a zero-sum game against other willing participants is not 'robbery' even if it does reduce the amount of resources available to them.
Wishing everyone the best! If you’re among the folks impacted, remember to get yourself health resources you need. Positive investment in yourself now is a positive investment in all your future projects.
Here's a video I found interesting, about three thinkers who have influenced Putin's ideology: Ivan Ilyin, Lev Gumilev, and Carl Schmitt. The author's discussion of "Russian Lawlessness," the notion that the law in Russia has always existed in the service of the powerful, and never as a restraint upon them, is particularly worthwhile.
The author also mentions Alexander Dugin, who often comes up in modern Kremlinology, and argues that Dugin has not been particularly influential. He is more a popularizer of ideas than an originator of them. Ilyin, Gumilev, and Schmitt are the actual sources of the ideas driving the modern government of Russia.
I'll take a listen. But sounds broadly accurate. Dugin's schtick is to tell the west he's prominent in Russia (which he is somewhat) and then to turn around and leverage that into being important in Russia. He's not that important to Putin's ideology or that prominent among regime supporters (though he has his supporters among Putin's supporters).
Putin is particularly fond of Ilyin and Gumilyov. I've heard some references to Schmitt but that's more popular with Xi. He's apparently particularly fond of the Nomos. There's not a ton of difference but I suspect it shows Xi's Marxist upbringing that he favors Schmitt who's relatively legal-materialist in his outlook. Meanwhile Gumilyov and Ilyin are more, for lack of a better term, national-spiritualist. Particularly Gumilyov.
ETA: I have a lot of nits but no major disagreements. I also didn't learn much though. Not sure quite how to convey this. Nothing said is wrong exactly and the general direction is correct but there's a lot of small details that add up to some notable oversimplifications imo.
This extract is from a local (Australian) paper today:
Shockwaves from FTX will be felt around the world, with FTX’s 1.2 million customers, including those locally, now realising they never owned any of the bitcoins or other digital currencies acquired through the exchange.
Among other things, FTX essentially sold a “paper bitcoin” or an IOU from the exchange, which barely had any asset backing. As investors tried to close out their funds, it became clear that nothing was there. In addition, it became a major custodian for start-ups to hold their cash raised in funding rounds. FTX is understood to have offered to back a number of start-ups financially if they used its facilities.
I'm crypto naive. Is this correct? If so, how is it that nobody tried to withdraw their holding over the entire lifetime of the fund to put it somewhere else? Or did they, and the bitcoins magically appeared (purchased on other exchanges) for these infrequent cases, so nobody was any the wiser?
Not sure if I understand what you're asking, but presumably FTX had a lot of assets/cash, but just not every single asset/cash. So if one guy wants out or some small percentage want out on any given day for random reasons, they can handle it. But when everyone wanted out, it all crumbled.
I remember reading a few years ago (I think it was probably this: https://www.schneier.com/blog/archives/2018/06/regulating_bitc.html ) about how in the traditional-money world, there are gold merchants, where you can buy a specific bar of gold and they will track that you personally own that specific bar in that vault right there, and then there are banks, which write down how much money you've deposited but don't keep track of the specific physical bills you handed them and in fact loan most of that money out to other people so they only have a fraction of it on-hand at a given time.
And in the crypto world, most people assume that exchanges operate like the gold merchants, where there's a specific bitcoin in a wallet with your name on it, but that most of them actually operate like banks, where they just have a ledger saying HOW MANY bitcoin they owe you, and at any given moment they probably don't have enough bitcoin to pay back everyone simultaneously.
(And yet they generally don't follow the laws that real banks are forced to follow in order to protect their customers, and don't have FDIC insurance backing up the deposits. I've been vaguely expecting that cryptocurrency exchanges are going to gradually repeat all the banking disasters that lead to current banking laws, unless something else killed them faster.)
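The bank-style ledger model described above can be sketched in a few lines. This is a deliberately simplified toy (no real exchange works exactly like this), just to show why recording how many coins you owe, while holding only a fraction in reserve, works fine for occasional withdrawals and fails under a run:

```python
# Toy model of a "bank-style" crypto exchange: it records how many coins
# it OWES each customer, but keeps only a fraction of deposits on hand.
# Purely illustrative; not how any real exchange is implemented.

class ToyExchange:
    def __init__(self, reserve_ratio):
        self.liabilities = {}           # customer -> coins owed on the ledger
        self.reserves = 0.0             # coins actually on hand
        self.reserve_ratio = reserve_ratio

    def deposit(self, customer, coins):
        self.liabilities[customer] = self.liabilities.get(customer, 0.0) + coins
        # Only a fraction of each deposit is kept; the rest is "lent out".
        self.reserves += coins * self.reserve_ratio

    def withdraw(self, customer, coins):
        if self.liabilities.get(customer, 0.0) < coins or self.reserves < coins:
            return False                # can't pay: the run has begun
        self.liabilities[customer] -= coins
        self.reserves -= coins
        return True

ex = ToyExchange(reserve_ratio=0.1)
for i in range(10):
    ex.deposit(f"user{i}", 100)

print(ex.withdraw("user0", 100))   # True: one customer can always exit
print(ex.withdraw("user1", 100))   # False: reserves can't cover everyone
```

With a 10% reserve, the first withdrawal succeeds and the ledger still shows everyone "owning" their coins, but the second customer to ask finds that nothing is there, which is roughly the dynamic described in the FTX extract above.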
You CAN create your own crypto wallet with your own cryptographic keys that you generated and managed yourself, but hardly anyone does.
I know that for Bitcoin, there's actually a limit on the number of worldwide transactions per unit time, and people bid on them in order to be able to use them, and so moving money from one wallet to another wallet actually costs money every time you do it, so there's an incentive to do it as little as possible. Not sure how many cyptocurrencies that's true of.
The problem with FTX is not so much that fractional reserve banking is necessarily bad, it’s that they said they didn’t loan out funds, loaned them out anyway to bail out their crypto hedge fund, and then lost it all. Banks are pretty restricted in what they’re allowed to invest in. FTX mostly just committed big-time fraud and blew up.
This is more-or-less true of all cyptocurrencies, though Ethereum is experimenting with ways to post verifiable-summaries of off-chain transactions that allows for better scaling (but I'm not sure how much better in practice)
At some point FTX transitioned into a Ponzi scheme but it still had lots of assets so as long as the amount of withdrawals is low the fraud can continue for a while.
I ran across an interesting take a while ago in a linguistics textbook about a factor that I think plausibly contributes to the worse cognitive skills of lower SES people. It was a study of how mothers talked to their small children (maybe age 3-4?) when asked to explain a task to them. The task was something like to go through a box of different colored blocks and put all the red ones in a bowl.
So the high SES mothers sounded something like this: "Becky, see this bowl?" [waits til Becky nods] "OK, so the blocks in here are a lot of different colors, right?"[waits til Becky nods]. "So I want you to look inside and find a red one" [ Becky rummages in box and pulls out an orange one ] "Actually, Becky, that block is a little bit red, but it's not a really red red. That's an orange block. Can you find me a red block that's really really red?" [Becky takes out a red block]. "That's right! Now that is a red block. Can you put it in this bowl over here?" [Becky does] "OK, you're doing great, so now your job is to go through the box and find ALL the red blocks and . . ."
The low SES mothers sounded something like this: "Becky, take the red blocks out of the box. . . .. Becky are you listening? Put the dolly down. Take the red blocks out of the box. . . . No that's orange. THIS is a red one. So take all the red ones out of the box . . .Becky! You're not paying attention. Take all the red ones out of the box and put them in this bowl."
So the high SES mothers were putting a lot more effort into engaging their child's attention, breaking the task into parts, and making sure the kid understood each part before moving on to the next one. They also gave more praise and less criticism. I'm pretty sure that if you took pairs of kids whose IQ was the same, and taught half of them the red blocks task in high SES style and half in low SES style, the first group would end up being better at the task. Seems quite plausible to me that after a few years of living with high vs low SES style instruction the two groups would differ quite a lot in cognitive skills.
So why did the two moms differ so much in their style of instructing their kids? The author's theory was that high SES moms are preparing their kids for high SES lives, where they can expect to understand work tasks and have a positive feeling about their job. Low SES moms are preparing their kids for blue collar jobs, where they are expected to obey, and to get in trouble if they do not.
Yeah, I agree that the study I described does not offer any way to disentangle the communication style of higher SES people from their intelligence from the kinds of life experiences they've had. As a lover of language, I think I just find the info in the study intrinsically interesting, even though it doesn't Settle the Question of how and why social class perpetuates itself and why higher SES people test smarter.
I do, though, think that in the case of this task, getting your small child to separate out the red blocks and put them in the bowl, it's not very plausible that differences in intelligence account for the differences in communication style. The task is so absurdly easy for any adult that even the dimmest parent is not going to find understanding and describing it to be challenging, right? You can say that it takes intelligence to realize your child will perform better if you break the task into steps, make it fun, praise their successes, etc. And I'm sure it's likely that the high SES parents had read more books than the low SES parents advising them to communicate with the child by keeping things positive, breaking tasks down into manageable parts, etc. But I don't think it's reading the books (a proxy for intelligence) that explains the difference. My parents hadn't read any of those how-to-build-a-smart-and-happy-kid books, but they just naturally talked to me the way the high SES parents did in the study, and I also found it natural to talk to my daughter that way. So why do higher SES parents communicate in ways more likely to lead to higher and happier performance on the tasks parents set?
-Comes naturally because it’s likely to be the style they grew up with
-Less stressful life so more patience (note that high SES parent has to stop a couple times to make sure she has Becky’s attention, and once to help her understand red & orange similarities and differences)
-Less reason to fear their child will have a bad life. They don’t know anybody who’s ended up in jail or as a truck stop waitress. Less fear for their child’s future makes them less likely to become impatient and worried when kid doesn’t pay attention or makes a mistake.
-High SES expectation that in coming years child will often be in settings that treat them in ways similar to the way the high SES parent treats their child during the block task: respectfully, etc. Whereas low SES more likely to expect child to end up in setting that bosses child around, demands attention and obedience rather than setting up a situation that facilitates it, punishes inattention and disobedience harshly.
I don’t think the intelligence of the parent affects communication in this task directly — though of course it does indirectly (higher SES is correlated with both less stress and higher intelligence).
Good communication is in no small part the art of imagining accurately your listener's state of mind and adjusting to it. That is especially true for instruction. What could be more diagnostic of raw intelligence than a superior ability to imagine accurately what is going on in the mind of another?
I'm sure all the environmental factors you mention have an effect at the margin, but my WAG is that the primary driver is mother's level of attachment, and the secondary is mother's intelligence[1], in particular her ability to accurately model what the child is thinking.
And if there is a correlation with SES, I would start off assuming it runs the other way, meaning a mother with these intrinsic gifts of nature would prosper relative to mothers who did not.
--------------------
[1] I'm dubious about the whole multiple-intelligences thing, but I'm willing to concede this is at least a different....facet of intelligence, perhaps. I've known mothers who were very bright in well-recognized ways (e.g. could solve differential equations as well as anybody) but who had a hard time "connecting" in this way to their own child. But maybe there was a personality defect that got in the way at that particular task? The rate of personality defects alas seems to rise with intelligence once you get well past the mean.
Maybe it's the way teachers spoke to them when they were kids? Some schools/kindergartens may do the patient, painstaking 'break it down' approach, where they have small(er) class sizes and more staff to interact with children who were trained to take such an approach, where I'd expect that to be higher-income schools.
The 'sit up straight, pay attention, why can't you follow simple instructions?' interaction I would expect from lower-income schools.
I'm doubtful because of cultural differences. Sit up straight, pay attention, follow simple instructions until you demonstrate you're ready for more is a good description of education even of very young children in Asian cultures, and nobody would suspect Asians of turning out poorly educated children. So I don't think any learned style of interaction is the key. I think you can generally teach like a Montessori tutor or like a martinet, and probably get equally good results -- provided you are very aware of how well you are getting across, and you modify your speed, style, timing, et cetera to maximize the results.
That is, I think awareness of the student state is the key to almost all good teaching, and I am rooting that awareness (for children) primarily in attachment and intelligence of the teacher (the mother in this case). I don't doubt that various techniques one can learn can nibble around the edges of the quality of the results, but I doubt they touch the core.
And parenthetically I would be cautious about overinterpreting correlations between style and outcome in teaching. I've seen very successful and very poor teaching outcomes in almost every style imaginable, so I'm dubious about the significance of the correlation. Many times I think the conclusions people draw from it are like concluding that umbrellas cause rain (on account of when it's raining you see a lot more umbrellas).
Actually, I have some data about the correlation between intelligence and empathy. So there's a test developed by autism researcher Simon Baron-Cohen (brother of Sacha, by the way) called the Reading the Mind in the Eyes test. For some reason, you can take it on Amazon, & Amazon even scores it for you. It's kind of fun to take, and it's here: https://s3.amazonaws.com/he-assets-prod/interactives/233_reading_the_mind_through_eyes/Launch.html
Anyhow, this test, in its revised version, is well thought of, and people diagnosed with autism do in fact get much lower scores on it. So there were a couple of studies that investigated whether scores on the Eyes test correlated with IQ. One of them looked at the correlation in kids and teens with Asperger's, and found a correlation of about 0.35, significant but pretty modest. (To figure out how much of the variance in the Eyes test score IQ predicts, you square the correlation coefficient, so IQ accounts for only about 12% of the variance.)
So this doesn't support half of your view, which is that mother's intelligence, as manifested in her ability to accurately model what her child is thinking, accounts for the difference in communication styles.
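The variance-explained arithmetic above is just the coefficient of determination; squaring the reported correlation of 0.35 gives the share of variance explained (a quick sketch, with 0.35 taken from the comment above):

```python
# Coefficient of determination: the share of variance in the Eyes-test
# score that a linear relationship with IQ accounts for.
r = 0.35            # reported correlation between Eyes-test score and IQ
r_squared = r ** 2  # square the correlation coefficient
print(round(r_squared, 4))  # 0.1225, i.e. roughly 12% of the variance
```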
It doesn't contradict it either. You'll note I put ability to connect -- which is almost definitionally what Aspies lack -- at #1 and intelligence at #2. You're not going to learn anything about the importance of #2 if you compare groups that vary drastically in #1.
Edit: that's an interesting test, thanks. I ended up bang-on average, so I guess I'm neither a super empath nor Asperger's. One thing I found interesting is that I don't think the eyes themselves were that useful; I got more out of the head tilt, the position of lids and eyebrows, muscles around the eyes, et cetera.
Or perhaps low SES moms are teaching their children like that because they've never seen an alternative? How do you think they were treated as children? Poverty is about cultural capital as much as material capital.
Yeah i am equally skeptical of the claim in the last paragraph. Seems like this is a learned behavior between generations. I think they have the causation backwards. Maybe because the high SES children "can expect to understand work tasks and have a positive feeling about their job" due to the way they were taught, they are more likely to remain high SES and the opposite is true for low SES.
The author's theory doesn't necessarily require a conscious decision on the mothers' parts to prepare their children for those lives. So this still comports with a cultural capital explanation.
I don’t know what it predicts about that. Seems like prediction would depend on how much time the kids logged with the nanny, and what nanny’s background was. Many that I have met were 20-something women from Europe, and did not seem much at all like blue collar Americans. Would not surprise me if many had a college degree and families that were middle class or better.
Seems to me you're nit-picking. As I expect you know, there's a lot of variation in how much time the nanny spends with the kids. I'd say 30 hours per week is about average in the nannied homes I'm aware of. The kids also spend quite a lot of time with their parents, plus of course the parents are more important to them than other adults, and the impact of parental communication is probably greater than the impact of an equal number of hours communicating with the nanny. In any case TGGP, I did not suggest that this study settles once and for all the question of why high SES people test smarter. I said it was interesting. I still think it is.
I honestly did not really gain anything from that post. Of course, most of the time the truth is boring. But it feels like this point is pretty well accepted?
I’m hoping to travel to Switzerland over the summer, and was hoping to pick up at least enough of the local language to politely ask if they spoke English. My understanding is that the “main language” is Low German, which is apparently a little different from High German, which is commonly taught in America.
1) Is this correct?
2) Is there a good beginner text for Low German to learn from for a native English speaker?
3) Any good shows I could watch in the language (ideally subbed in English)
Thanks
edit just to note that yes, I expect many people will speak english, but it still seems polite to put in an effort to learn their language
I expect that you will find Swiss people *easier* to understand than Germans. Swiss people (in the German part of Switzerland) consider Swiss German their native language and High German their first foreign language. They learn High German at school, so if they speak High German (which they do for foreigners and for Germans), then their pronunciation is much clearer and more to-the-book than that of actual Germans.
That is not my experience, as someone who has lived in the German speaking world for decades. Many (I would say most) Swiss Germans have strong accents when they speak Hochdeutsch. Not all of them are Hazel Brugger. Some Swiss Germans even find it easier to speak English than Hochdeutsch (or so they claim). I grant Swiss Germans will probably speak standard German more slowly than a native speaker would, that is helpful.
As others may have mentioned, Swiss dialects are a variation of “High German”; “Low German” includes Dutch and the dialects, now mostly moribund, spoken in Northern Germany. That said, the various Swiss dialects are not easily intelligible even to native German speakers. Generally Swiss Germans expect non-Swiss Germans to address them in standard German. It is not a culture that is particularly welcoming to outsiders, and you may find that people do not appreciate you trying to break in; they might even think you are mocking them. To further complicate matters, the odds are that anyone you deal with in a customer-facing service position won’t be native Swiss anyway and won’t speak Zürcher well. Otoh, everyone you talk to in Zurich will speak English and won’t even be offended at being addressed in English.
Huh, I was under the impression that "High" German was associated with the mountains, closer to Switzerland, while "Low" German was closer to the "Low Countries" like Liechtenstein.
The confusion is because the opposite of "High German" is not "Low German", but it is "dialect". Low German is only one of many German dialects.
It's correct that Low German ("Niederdeutsch", "Plattdeutsch") is spoken in the flat coastal area. But it's actually one of the dialects which are rather close to High German (which originated from the Hannover region, pretty far in the north). Other dialects like Bavarian or the many Swiss German dialects are further away from High German.
You also seem to confuse Liechtenstein with something else (Luxembourg)? Liechtenstein is a tiny country between Switzerland and Austria, that's about as far up in the mountains as it can get.
Ok, as a native German I have never heard this meaning of "oberdeutsch" as "highland German" before, but wikipedia confirms that you are right. Thanks!
I took German classes from two different professors (one from Innsbruck, one from Berlin). Both admitted they wouldn't be able to understand someone speaking Swiss German. But a Swiss German speaker would understand standard German. If I were you I'd just learn a bit of standard German since it will be vastly easier to find learning resources for that.
1.) No. German has regional dialects, of which Switzerland represents a few. But there are also dialects in Germany itself, in Austria, etc. Low German is a collective name for a group of these dialects/languages. Switzerland and Germany both have a standard version of German called High German. While the dialects can be very different, the High German standards are similar enough to be mutually intelligible. The phrase for "do you speak German" is the same in any case. Additionally, it varies by canton: in the French cantons or Italian/Romansh country, German will not be used day to day.
2.) Low German is not a language. It's a collective name of multiple languages/dialects. You'd have to pick one. And, to be frank, unless you really invest it's probably not worth the effort.
3.) Any German show would be sufficient. At worst you'd pick up a German accent. The big differences are that the Swiss don't use some of the German special characters, like ß, and of course the accent itself.
When I visited Switzerland, I found that, depending on region, people didn't necessarily know the language of a region outside! We mostly operated with (very basic) German, but found a man randomly while asking some directions who was from Geneva and *only* knew French - no English OR German! (Which I know better than German, so that was lucky.)
An expensive politeness signal indeed. As a German native speaker I much prefer you talk to me in fluent English than in broken German. It is easier on both of us. Though you will find others with a different view for sure.
Also Swiss speaks Switzerduetsch (Swiss German) or French depending on the region afaik.
Zurich uses a dialect of German so your basic approach/choice seems sound. I don't know the exact difference or how to specifically learn Swiss Deutsch (which it is often called) but Googling for Swiss Deutsch or Swiss German will probably generate hits.
There are YouTube videos, but beyond maybe “grüezi” (hello), learning a dialect is something you either invest a huge amount of time in or just leave alone. How would someone in Queens feel if a random German walked in their bar and said “hey, how ya doin! I’m toisty heah, trow me a beah.” But with a strong German accent?
Just so I understand your lessons learned (so far):
You rated the trustworthiness of FTX on par with that of Walmart, Google and Facebook. So you’re going to setup a prediction market to help you recalibrate.
You don’t see EA leadership as culpable for donor capture by a fraudster because some professional investors who allocated a fraction of their portfolio also invested in FTX. Nothing to learn about aligning with a single donor or ceding brand messaging to a single donor. Or portfolio theory, aka diversification.
You believe the SBF / FTX circle weren’t technically polyamorous or outright smoking meth, so let’s not be unfair to them. Nothing to learn about unconventional behaviors as it pertains to credit worthiness or operating expertise or key man risk.
You think SBF only started being fraudulent at some certain point and this wasn’t a systemic or cultural failure within FTX or the industry. So nothing to learn about the broader ethical issues pervasive in crypto. Or revising our priors (or whatever it’s called) when crypto firms consistently defraud people with tacit justifications from the community like “if it’s not your keys, it’s not your crypto”.
The smartest kids, from the most prestigious institutions, with an elite pedigree committed a most ordinary fraud by being conmen. Nothing to be learned about the kinds of fraud committed by intellectuals and the kinds of justifications they tend to use - saving the world or whatever.
There’s approximately 100 million people hearing about Effective Altruism for the first time and it’s in the context of EA being the philosophical motivation of the largest fraud of 2022. And also, the full weirdness of EA is on display as fodder for the upcoming Netflix series and movie(s).
You don’t yet see the connection between giving attention seeking TikTokers the benefit of the doubt and giving overeducated, virtue signaling Slytherins the benefit of the doubt. And why we don’t give people the benefit of the doubt when there are consequences on the line.
You are still in the earliest stages of grief. And I think this will be an incredible moment for personal growth for EA thought leaders.
I think you're being unfair here. It's not like Scott is denying that fraud seems to have been committed. It's also not like any EAs are defending SBF.
Maybe the worst thing EAs could be accused of is taking crypto donations when they don't personally believe crypto is a valuable long-term investment. Cause it does essentially mean they're profiting from a scam.
You’re making a lot of great points here, and I also look forward to reading along as he thinks through them over time.
That’s part of why I think it’s essential that there eventually be some structure to that reflection. The Open Thread post as a way to make it seem less real (or perhaps to share initial thoughts with a smaller and more engaged circle?) is fine as an initial thought sorting mechanism, but over time I think you’ll have an especially insightful perspective not just on consequential events, but on major human themes. I hope you’re able to articulate and share those thoughts.
And taking some time to do so will almost certainly result in a more serious and engaged (albeit smaller) audience.
I don’t see most of your characterization of Scott’s post in his post. Ok, you have your point of view, but I’m not convinced by your effort to stick Scott with it.
I don't think it was even as exciting as meth-fuelled orgies; more that at least one person was on the usual high-achieving student's study drugs of choice (Ritalin, Adderall, whatever), and that the little gang of pals and colleagues probably all were, or had been at some time. (Caroline Ellison tweeted about how dumb life was when non-medicated, which indicated she was on Adderall or the like; that was the amphetamine that got rounded off to meth.)
The orgies bit comes from the perception of poly, which I don't know if it was so; they were generally romantically involved as currently or formerly dating, and they were poly-adjacent. But that they were having drug orgies is, alas for the scandal quotient, probably not true. Or at least not more so than the usual Bay Area circles which are sex-positive, poly-positive, sex work is real work, non-traditional families, gender and sexual orientation is on a spectrum and the whole nine yards.
One of my biggest concerns is that EA is itself playing a status game. You just have to glance through the forums to see the number of people who are more concerned about "optics for the movement" than real problems.
There are many people clearly (including Will, I would argue) using EA to build reputation above and beyond the goal of charitable work. I really don't see how this is so different than other forms of charity. Wordsmithing the rules doesn't change the fundamental status game.
Anonymous philanthropy, doing good without branding oneself a do-gooder, solves this problem. But how do you build a brand/community/movement around doing good things but not taking credit for it?
Outwards, everyone thinks you are making sacrifices to Yog-Sothoth in dark rituals. The true secret for the innermost circle only is ... that it is networking event to encourage rich people to buy bednets anonymously?
For starters if you ever put "build a brand/community/movement" before doing the work it's probably going to end up like this. It looks to me that plenty of people in EA are overtly using it to build their own status, and the movement is mostly just OK with that.
It’s a dangerous, almost unavoidable cycle. A bigger community of people means more people to do good things, but you’re going to attract a lot of people who care much more about the community itself than the good things.
Reasonable speculation, I guess (from your POV), but no, I actually live in one of the “safely” bleeding red areas of Michigan, just a few miles from where some of those White Nationalist militia wannabes got raided for their part in the plot to kidnap and execute Gov. Whitmer. Away from Ann Arbor, the rich people seem to love De Vos’s idea to just stop funding public education. I am interested in how you would help me convince my neighbors with the disabled teenager, that the Republicans in Congress who talk about doing away with Medicaid, Medicare and Social Security don’t mean just for “lazy blacks and immigrant trash”, but for them, too.
>I am interested in how you would help me convince my neighbors with the disabled teenager, that the Republicans in Congress who talk about doing away with Medicaid, Medicare and Social Security don’t mean just for “lazy blacks and immigrant trash”, but for them, too.
Well 1) I don't think it is clear that is what they mean... that is what you assume they mean.
The activism that resulted in the federal education-related disability laws (IDEA and Section 504 and their precursors) created such a strong before-and-after that, if their child did not have to deal with public schools pre-IDEA, they will probably not know how weird it was. See if you can find some circa-1985 videos about special education. Probably almost all the things they access educationally will be absent in the videos, and that might be a wake-up call. If I can find any good sources I'll post.
I think some people hate Social Security in part because disabled kids can get benefits for not-always-obvious conditions. And there are clear pipelines for providers who are better at getting four-year-olds onto Social Security. Some of it seems false to me. Maybe this family is not in that category, though.

Maybe point out that some people blame them, blame SSI in general, for quite a few social ills, even blame SSI for the struggles of Social Security in general. (SSI is the form of disability income someone can receive if they have not worked enough quarters but are medically determined disabled; the dollar amount is lower than SSDI, but it's still significant.)

They have probably relied on Medicaid, Medicare, SSI, and special education for many things. You can develop the discrepancy between their quality of life with and without those things, and then point out that youth were not originally eligible for benefits. For a family in poverty, having a kid who is determined disabled may make the difference between paying the rent each month and being homeless, so disability becomes a necessity in some contexts. Does that seem backwards? Yes, but one can sympathize with the family and realize they need the resource and the child benefits. Still, some people hate that kids can be eligible, or even that someone who has not worked enough quarters can get money; it seems like a dilution of the original purpose.
That will probably sound mean but it might make sense to them.
“You wanna know what effective altruism means? It means that you steal other people’s money while bragging about saving the world, while taking a big chunk for yourself. That’s what it means.”
SBF being a large public face of EA means that it’s probably not a great time to embrace effective altruism as a brand, even if you strive for similar principles. Maybe even stronger: I’m not sure if the brand recognition of EA will be worth the negative connotations going forward. Probably worth reassessing in a month or so as the immediate storm passes.
I'm not a fan of EA, but I'm even less of a fan of your definition. At any level of income or wealth, an increase in how much you give each year has to be viewed as an increase in altruism, whether or not it's Effective with a capital E. The idea that the increase in wealth that you experienced or achieved must have resulted from theft is ridiculous and in fact repugnant.
Take it up with David Sacks (it’s a quote from him, as I noted). I brought it up to demonstrate what I think will be a common response to recent events so I could make my subsequent point.
I agree with you that there are ways other than theft to obtain wealth. I’m also pretty sure that David Sacks (who is incredibly rich) doesn’t think that the only way to gain wealth is theft, but you can listen to the linked podcast to get a better idea of his statement in context if you’d like.
There's a really good chance Sacks is right on a whole bunch of levels! Hijacking the direct feedback loop between what you do and how you are rewarded is dangerous.
The "brand" kind of thinking is exactly the problem! It treats the problems as a status problem. If EA was so effective it would just point at the results and keep moving.
My rule of thumb regarding donations from companies: they are used to clean up the company's image. If you can't see a reason the image needs cleaning, that reason is probably hidden. So accepting a donation precisely because you see no moral concerns is itself a red flag.
If a billionaire really does want to be a big donor for good reasons, they will do it on their own. Their goodness will be clearer to the public eye, and they won't have to deal with shareholders' pushback.
I don't know about the specific finances of FTX, but I do know about audits. Auditors work based on norms, and those norms get better at predicting new disasters by learning from previous disasters (such as Enron). The same is true of air travel, one of "the safest ways of travel".
In fact, at the very time we're discussing this, a tragic air travel accident happened at a Dallas airshow. Our efforts might be more "effective" if we stopped discussing SBF and instead discussed airshow safety and its morality.
My guess is that the only way alarm bells would have rung for FTX investors was by realizing that there was a "back door" where somebody could just steal all the money. Would a financial auditor have looked into that, the programming aspect of the business? I doubt it, unless specific norms were in place to look for just that.
I think that there are two specific disasters here:
- SBF's fraud, and
- SBF harming the EA "brand".
If what we want is to avoid future fraud in the Crypto community, then the goal of the Crypto community should be to replicate the air travel model for air safety:
- Strong (and voluntary) inter-institutional cooperation, and
- A post-mortem of every single disaster in order to incorporate not regulation but "best practices".
However, if the goal is to avoid harming the EA "brand", then there's a profession for that. It's called "Public Relations".
PR is also the reason why big companies have rules that prevent them from (publicly) doing business with people suspected of illegal activities. ("Caesar's wife must not only be pure, but also free of any suspicion of impurity.")
For example, EA institutions could from now on:
a. Copy GiveDirectly's approach and prevent any single donor from representing more than 50% of their income.
b. Perhaps increase or decrease that percentage, depending on the impact on the charities SBF supported.
c. Reject money that comes from Tax Havens.
c.1. FTX was a business based mainly in The Bahamas.
c.2. I don't know what the quality of the Bahamian standard for financial audits is. In fact, I don't even know if they demand financial audits at all... but I know that The Bahamas is sometimes classed as a Tax Haven, and it is more likely that we find criminals and frauds with money in Tax Havens than outside of them.
c.3. Incentivize their own supported charities to reject dependence on a single donor, and to reject money that comes from Tax Havens.
Perhaps also...
... Campaign against Tax Havens?
- Tax Havens crowd out money given to tax-deductible charities, and therefore money for EA Charities.
- There is an economic benefit to some of the citizens of the Tax Haven countries, but when weighted against the criminal conduct that they enable... are they truly more good than bad?
... Create a certification for NGOs to be considered "EA"?
- Most people know that some causes (Malaria treatments, Deworming...) are well-known EA causes.
- They are causes that attract millions of dollars in funding.
Since there is no certification for NGO's to use the name "EA", a fraudster-in-waiting can just:
1. Start a new NGO tomorrow.
2. "Brand" itself as an EA charity
3. See the donations begin to pour in, and
4. Commit fraud in a new manner that avoids existing regulation
5. Profit
6. Give the news cycle an exciting new story, and the EA community another sleepless night.
In fact, fraud in NGOs happens all the time. One of the reasons the Against Malaria Foundation had trouble implementing its first GiveWell-recommended distributions is that it was too stringent on the anti-corruption requirements for governments.
It's in the direct interest of the EA community to minimize the number of fraudulent NGOs, and especially the number of EA-branded fraudulent NGOs.
> My guess is that the only way alarm bells would have rung for FTX investors was by realizing that there was a "back door" where somebody could just steal all the money. Would a financial auditor have looked into that, the programming aspect of the business? I doubt it, unless specific norms were in place to look for just that.
AIUI there was a very large related-party loan between FTX and Alameda that really should have raised red flags for anyone who knew about it, and this kind of thing has been an issue in trad-fi before so it's not like they wouldn't know it's a potential issue.
There really needed to be some kind of internal governance or guard rails so that that sort of thing just wouldn't be possible. It's easier to avoid committing crimes in desperation if you've already pre-committed yourself to rules in some binding way.
Or maybe that just wasn't possible. Internal controls are always vulnerable to subversion at the top.
Extremely basic and obvious advice that probably no one needs to hear: Please don't have anything invested in crypto that you're not ready to kiss goodbye.
I'm pretty sceptical of the long term value of crypto, but even if you're a lot more bullish, you can't deny it's a highly volatile and risky business.
If you think it's +EV and you have the risk tolerance for it, more power to you. But be actually literally ready to walk away if it all explodes.
Seconded. At the end of the day, the question is what long-term needs cryptocurrencies and the companies built around them can fill, and how much money there is in it.
Right now, it feels to me that the boom is more from people figuring that there is a bigger fool somewhere that they can sell their tokens to eventually than being convinced that any token is a solid investment in itself.
Then again, I am hopelessly naive. I mean, if you had told me in 2010 "people will pay with cryptographic money in darknet marketplaces" I would have said "okay". Then you would have added "and also, cryptographically signed URLs to some web page showing an MS Paint image will sell for hundreds of dollars" and I would have said "no way".
So for all I know, the growth of "crypto" is sustainable, and will make us all rich. Personally, I would not bet on it.
Also, FTX is described as a "cryptocurrency exchange" on Wikipedia. Who are the victims then? People who were in the middle of exchanging currencies? People who kept their balances on their server instead of transferring everything to wallets under their own control? People who invested in FTT tokens which crashed? People who invested in the company?
I'm grateful for Scott's internet thing teaching me about deontology and consequentialism. Is there a similar fancy term for the ethical system known as the golden rule? “Do unto others as you would have them do unto you.”
I note that it has a failure mode - what if you enjoy people treating you badly? Then you need to add an epicycle - “Do unto others as most people would have them do unto most people.” Which would be much more difficult, because of typical mind fallacy.
There are two ways to avoid the failure mode -- the Negative Golden Rule, or "Do not do unto others what you would not want done unto you", and the Platinum Rule, or "Do unto others as they would have done unto them." These aren't without failure modes of their own, of course.
My current personal version of that principle is "honor the deals you would've made (if ideal versions of everyone had had unlimited time to negotiate)".
You'd mostly expect most people to make similar deals, like "don't take each others' stuff (with narrow but important exceptions)", but this allows that the hypothetical deal I made with Bob and the hypothetical deal you made with Bob might be different, and the deals might not be symmetrical, but they all have to be things Bob would've agreed to.
Strictly speaking deontology and consequentialism are meta-level debates about why you should act certain ways. The ethic of reciprocity is an object level principle. So for example, you could have a deontological or consequentialist justification for the same rule about doing unto others as they do unto you.
What precisely was the scam/fraud in FTX? I'm not clear on the details, and am not sure which parts are just bad financial decisions and poor circumstances, and which are clearly fraud.
Forbes is *very* salty about the whole thing and may be a little unfair, but it's a good look at the general impression this debacle has left on people:
"It was 2017 when Bankman-Fried first began dabbling in cryptocurrency trading. With an untamed mop of hair completing his disheveled gamer look, he’d just quit his job as a quant-trader at Jane Street, and saw an opportunity in his new hobby: the price of Bitcoin was valued differently in exchanges across the globe. If he could buy low then sell high in another region of the world, he realized that he could build a trading floor around Bitcoin arbitrage.
He launched Alameda Research with around 15 employees and traders, bringing in colleagues from Jane Street, like Caroline Ellison, and others like Nishad Singh, whom he had met through the Center for Effective Altruism, a group of thinkers and luminaries who vow to donate much of their wealth and with whom Bankman-Fried had become enmeshed. “When we joined, his goal was to make a billion dollars,” one of the first Alameda employees told Forbes. “Alameda traders really were beholden to what SBF was doing: he was the head trader, they were the foot soldiers.”
From the start, “Sam wanted to take riskier decisions than the others wanted to take,” said another early Alameda employee. Specifically, he pushed back against efforts by some to slow down risky trading efforts, and overlooked the challenges of extracting capital from shady exchanges. “Sam ran the shop, Sam ran everything, we all trusted him, and believed him,” said an early employee of Alameda who worked with Sam and his close circle. “It was a dictatorship, in a good way, a benevolent dictatorship.”
Details are unclear, since this whole story is only about a week old, but what it looks like is that FTX lent large amounts of customer assets to an associated-but-distinct trading entity, Alameda Research, against dubious collateral, probably to prop it up after it took losses (but that's a guess on my part). FTX offered leveraged trading, so just lending customer assets to other customers was permitted (at least to some extent; FTX may have gone beyond what was allowed), but lending against bad collateral means the loan might not be paid back in full, potentially bankrupting the exchange and losing some or all of the customers' assets.
Last week information about this leaked, triggering huge withdrawals and large drops in the value of Alameda's collateral, bankrupting both it and FTX.
Generally related-party loans like this draw suspicion because there's a temptation to pretend that the loan is good even if it isn't. Morally (and possibly also legally) this is fraudulent because FTX was claiming to take good care of its customer assets, but actually traded them for some magic beans which disappeared.
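The mechanics described above can be sketched in a toy model. This is my own illustration with made-up numbers, not FTX's actual books: deposits are lent out against collateral, and if the collateral's price collapses before the loan is repaid, the exchange is left with a hole it can't fill.

```python
# Toy model (illustrative only, invented numbers) of why lending customer
# deposits against volatile collateral can bankrupt an exchange.

def exchange_solvency(deposits, loan, collateral_units, collateral_price):
    """Return the shortfall customers face if the borrower defaults and
    the exchange liquidates the collateral to cover the loan."""
    collateral_value = collateral_units * collateral_price
    recovered = min(loan, collateral_value)   # can't recover more than is owed
    assets = (deposits - loan) + recovered    # cash on hand + liquidation proceeds
    return max(0.0, deposits - assets)        # customer money that can't be returned

# $10B of customer deposits, $8B lent to an affiliate against 100M tokens.
print(exchange_solvency(10e9, 8e9, 100e6, 80.0))  # token at $80: shortfall 0.0
print(exchange_solvency(10e9, 8e9, 100e6, 20.0))  # token crashes to $20: $6B hole
```

The point of the sketch is that the exchange looks fully solvent at the higher token price; the hole only materializes once the collateral reprices, which matches the "traded them for some magic beans which disappeared" description below.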
So a regular bank would do a version of this as well, right? The bank wouldn't want assets as risky as crypto coins, and there are regulations as to how risky they can be, how much leverage they have, etc... but I thought 'using customer deposits' is pretty standard? A firm can go insolvent without committing fraud, right? Is it the 'customer agreement' part that makes it fraudulent?
FTX isn't a bank; it's an exchange. With a bank, the deposits are (loosely) the property of the bank, to do with as it wants. The depositors get paid for this through the interest on their accounts (this is a vast, vast, vast simplification).
With an exchange your deposits are just there to make it easier for the exchange to do the service you are paying it for which is to shuffle your money around between asset classes. The exchange takes a small fee for this.
For example, my savings are at Ally bank which pays me interest on the money deposited. The rate fluctuates based on what Ally can make on that money based on the terms of my agreement with them.
My brokerage is Fidelity, which is NOT a bank. Cash in my Fidelity account doesn't earn interest unless it's invested in some type of asset (Fidelity automatically puts it in a money market fund, which is very safe but low-yielding). But, crucially, Fidelity doesn't take my deposits and make loans with them.
Furthermore, the FTX terms of service explicitly stated that FTX could not and would not use deposits to make loans.
The bank generally does not funnel customers' deposits into a company literally at the next desk using that money to make profits for the bank and to pay its debts, and pay the customers in toy currency it invented itself. "See, your ten thousand dollars are safe, here's my 100 Noddy Coins which represent your ten grand (and which you can't cash in or exchange anywhere else but here)".
Wasn't this pretty much exactly the origin of banking? You'd give your money to the House of Fugger or someone, and they'd give you a promissory note that could be redeemed for a similar amount of money at another branch of the house.
"So a regular bank would do a version of this as well, right? The bank wouldn't want assets as risky as crypto coins, and there are regulations as to how risky they can be, how much leverage they have, etc... but I thought 'using customer deposits' is pretty standard? A firm can go insolvent without commiting fraud, right? Is it the 'customer agreement' part that makes it fraudulent?"
The customer agreement part makes it fraudulent.
Also, until fairly recently banks could not do this, because Glass-Steagall created a separation between 'retail' banks (take deposits, make loans for cars/houses/etc.) and 'investment' banks (borrow money, usually not from retail customers, and place bets on futures, etc.).
The banks are also HIGHLY regulated in terms of how much they can play with their customers' money. FTX ... not so much (especially since it wasn't supposed to be playing with that money at all).
A reasonable 'real bank' analog is probably the MF Global financial scandal from 2011 when customer assets got 'diverted' for MF Global to make bets.
In terms of things that have been proven, I don't know that that part has actually been shown to be illegal/fraudulent.
The withdrawals (as claimed hacks) seems the closest to genuine fraud (I mean, hell, IS genuine fraud/theft) but, again as far as I know, remains an assertion not yet proved.
I'm not exactly trying to play legal games here, more trying to distinguish between "what do we *actually* know" and "it's cool to make fun of the rich guy when he fails".
I have zero dog in this fight (don't know the people, don't care about crypto) but from my limited following of the issue, the whole thing was more "wildly optimistic claims that could, perhaps, have panned out, but didn't" than deliberate fraud. This is, of course, the origin story of basically every unicorn – to take a very different sort of example, it's hardly clear that SpaceX or Blue Origin will ever make money, so talking them up is a set of optimistic claims about how the world might evolve, which is, as far as I can tell, essentially the same thing as FTX and similar were/are doing.
....
Now was Elizabeth Holmes the same thing? That *seems* more cut-and-dried as deliberate, definite, faking results, rather than just optimistic stories about what the future of blood tests might look like, but I'm no expert.
Perhaps this was more like an Enron? Where things started off as a reasonable story about the future, then slowly drifted into not exactly fraud but let's move the accounting around to keep the trend looking good this quarter, till eventually we do get (mainly only a few) cases of deliberate fraud. Lots of attempting to hide the reality, not much real fraud.
I'll leave the judgement of morality and legality to others, I care about the understanding.
My GUESS is that FTX essentially ran down that same path...
Which is, of, course, as a practical takeaway, why Jewish law prescribes a metaphorical fence, a Chumra -- don't try to avoid the implications and reasons for the law by twisting technicalities because at some point you'll be so comfortable with moral (if not technical) fraud that you'll be willing to move on to actual, legally defined, fraud...
> but from my limited following of the issue, the whole thing was more "wildly optimistic claims that could, perhaps, have panned out, but didn't" than deliberate fraud.
nope, he literally built a personal backdoor that allowed him to take 10 billion dollars of customer money without triggering internal alarms and audit.
This is on top of the whole thing being held together by the shittiest accounting practices known to man and overleveraged to hell and back.
Given the well-deserved level of sympathy this stack and its readership has for SBF, I'll try to be as tactful as possible in bringing up another angle that ctrl-F does not yet turn up in the comments. This requires tact not because it impeaches his character (I would argue that it does the opposite) but because it uses tropes ordinarily associated with right-wing conspiracy theories. However, given the amount of updating going on this week, it seems fair to at least acquaint ACX with accusations that he was laundering money via Ukrainian corruption (for instance, FTX running a donations-for-Ukraine site that took in $55 million before being shut down and deleted; still on archive.org, though) and was making promises of larger donations to Democratic candidates (larger than his $30 million previously, already the second-largest individual donor after everyone's favorite bugbear) in amounts not really possible for any sane expectation of how crypto makes money.
How would this be better than simple greed and ordinary fraud? Well, if he was running a racket on behalf of purely political money, we have to ask whether we care more about campaign finance than about winning; and if he got pulled in over his head in old-fashioned dirty money and tried to move some around to keep things going, and ended up losing actual people's cash in consequence, that at least demonstrates that he didn't start out looking to defraud them. The analogy is more that someone starting a small investment opportunity also starts working for the mob (after all, they're not violent anymore and are just doing extra-legal finance maneuvers) but ends up losing everything. The running for the Bahamas at the end with as much as he can carry? Eh, well, honestly, once the panic sets in, none of us are at our most altruistic.
I disagree that it is "fair" to "acquaint ACX" with accusations that a casual investigation will reveal are conspicuously unsupported by any evidence. That just boosts the noise while carrying no signal. It only took me ~5 minutes to verify that there is no evidence for the "laundering money via Ukrainian corruption" accusation; I don't know how many other people here wasted the same 5 minutes, but that's all on you. It was your job to do that investigation yourself, and either show the evidence behind the accusation or drop the matter entirely.
Oh, I did. Note your apparent indignation. Consider the mental trap we all have about in-group members: there is no proof of our guy doing anything illegal, while our opponent is obviously colluding with the enemy. Our guy is a pragmatic statesman; the other guy is a dishonest politician. It's one of those irregular verbs, as they say.
My stated purpose of bringing this to your attention is the same as what Scott identified in himself when he wanted to speculate about how perhaps SBF wasn't aware of the fraud -- speculations that soften a bad story about someone we care about and respect.
For my part, I HOPE for his sake he WAS laundering money. It's not a bad thing, just illegal. I'll go to the mat for Robin Hood, too. I'm no expert on the subject, but I suspect that campaign finance laws are like the tax code -- they exist to be circumvented in various creative ways by teams of lawyers and accountants. If someone of SBF's stature was laundering money through Ukraine so as to more effectively support candidates he believed in, I say good on him. And if this went wrong in some way that caused him to end up needing to commit fraud, that to me sounds much more ethical than if he'd just started out planning on scamming the average investor.
Meanwhile feel free to believe that your worldview is composed purely of unimpeachable concrete things. Must be lonely up there.
Heh, yeah after reading the interview I'm including here I retract any positive-sounding things I said about his possible motives. I revise my opinion downward to "guy who is willing to commit fraud and is directly on record boosting something that walks, swims, and quacks like a money-laundering operation at the very least would have laundered money if he could." I'm not going to spend any further brainpower attempting to gild this turd, though.
There's little doubt but that SBF has committed some sort of fraud on a large scale, though we can argue about why.
The theory that you want to "acquaint us" with, requires that all three of A: SBF and B: the US Democratic Party (or maybe just Joe Biden), and C: the Ukrainian government, have committed fraud. But with no evidence for the second two parts other than "this theory isn't literally incoherent". Indeed, it seems to me like the whole *point* of this arbitrary, contrived, unsupported theory is that there are a lot of people who have a persistent desire to accuse the US Democratic Party and/or the Ukrainian government of fraud, have a hard time being taken seriously outside their own bubbles because of the lack of evidence, and so like to invent hypothetical scenarios linking their actual targets to unambiguously-guilty fraudsters in hopes that the rest of us will buy in to guilt by association.
You missed the part where he boosted a crypto-donations-for-Ukraine site which reportedly collected $55 million before being shut down and deleted from the internet?
And again, I'm not even against this behavior. I don't regard finance law as really about ethics (I'm not doing the ends-justify-the-means bit). Honestly I'm just shocked at how offended you seem by the idea of democrats doing irregular but ethical things with money. I'm not claiming anyone was pocketing the money for corrupt purposes. Also shocking is that a famously-corrupt (by world standards) regime like Zelensky's is now so defended and lionized that it's bad to presume they're doing run-of-the-mill shenanigans with funding. I really don't get it.
I suppose if you're convinced I'm some kind of right-winger who wants to 'hurt' your team, the vitriol would be warranted. Anyway, it's a moot point -- I originally meant it as an elaboration on SBF's motives that I found exonerating, and I've given up trying to defend him and won't speculate further as to his involvement with Ukrainian fundraising.
If there were a single person or organization on the planet I engaged in as much motivated stopping to defend, though, I'd ask myself some hard questions.
Where would I go if I was interested in reading more about this topic? Especially concrete evidence or signs that actual money laundering was going on.
I would suggest it is much higher for businesses involved in crypto, or that lobby heavily, or that suddenly make lots of money, and possibly higher in ones that claim a noble purpose.
But with scientific papers, charities, exams, memoirs etc whenever people investigate them fraud is endemic as Scott has shown lots of times.
Maybe I am being overly cynical, but you should probably assume a decent share of your friends, the businesses you use, the charities you donate to, the scientific papers you respect or that have entered the zeitgeist, and the books you read are completely dishonest and fraudulent.
You are much more likely to hear about something that is a fraud than something that isn't. Your local grocery store carries tens of thousands of products that aren't frauds, for example. There are thousands of banks in the US that aren't frauds.
Crypto is a great source of frauds because it's a new industry with lots of unsophisticated people willing to throw money into it. In the 90s these same crypto fraudsters would have been doing pump-and-dump schemes, and in the 2000s, mortgage scams.
An interesting question is how much of crypto’s current state is tied to fraud. Is it possible to stamp out this fraud and still have a working crypto eco-system afterwards?
It will be interesting to see how things develop. I personally think fraudulent behavior is more central to crypto than to most other industries, so eventually crypto will essentially fizzle out and die. I'm not so attached to that view, though, that I'll be mad if it doesn't.
If the fraud went away there would be way less attention on crypto and the asset prices would crumble, but I think that's more an indictment of how few people see crypto as useful than evidence it's only good for fraud.
I agree. I tried to phrase my comment in a way that allowed for that to be true.
Basically, I was saying that things like mortgages still exist even without all the fraud, but crypto may be different because it lacks inherent usefulness.
You mentioned that crypto is good for fraud because it’s a new industry. That’s true, but it’s also the case that unlike other new industries in the past, there may not be much left once the dust settles on the fraud.
A few months ago, FTX bought a stake in my former employer. At the time I was a little miffed that I wasn't able to participate in the transaction, it's a private company that doesn't pay dividends and I can't easily sell it outside of an arranged transaction like this. Now I'm glad I didn't end up with Ponzi scheme blood money.
It is humbling to know that my ex-coworkers, people who have been in the finance industry for years, people I respected and thought had well-honed bullshit detectors, still fell for FTX. Personally I've never touched cryptocurrency but figured, hey, if "my people" like this Bankman-Fried guy, he's got to be less scammy than the rest of them, right? Ha ha no.
There have been lots of frauds over time, and there will be many more. It turns out that SBF was especially good at branding himself with EA and thus avoiding some scrutiny, but Elizabeth Holmes, Bernie Madoff, Enron, and many others have done similar stuff without any particular philosophical trappings. I suppose some introspection in EA circles is warranted, but most of the self-flagellation seems somewhat beside the point. Now, if there were a trend of EA types defrauding people, that would be a different story; for now, this seems like a one-off. Bernie Madoff is Jewish, and his fraud (bigger than FTX's, by the way) didn't lead to Jewish people reevaluating the morality of their religion or ethnicity, nor should it have.
Now crypto is a different story. There IS a trend of crypto people defrauding everyone they come in contact with, and at this point my take is that the whole space is quite rotten. If anyone needs to do some soul searching, it's crypto. Or maybe just a few prison sentences will do.
I completely agree. Kind of a reverse "no true scotsman". Instead of using the group to criticize an individual, EA critics are using the individual to criticize the group.
Like Madoff's, the FTX fraud is actually quite bland and ordinary as far as financial frauds go. The differences are the scale and the high profile of the prime suspect, both of which were true of Madoff too.
I liked Cowen's take on this from this morning:
"I would say if the FTX debacle first leads you to increase your condemnation of EA, utilitarianism, philosophy, crypto, and so on that is a kind of red flag for your thought processes. They probably could stand some improvement, even if your particular conclusion might be correct. As I’ve argued lately, it is easier to carry and amplify damning sympathies when you can channel your negative emotions through the symbolism of a particular individual."
When it comes to crypto, I often think of something Scott wrote a few years ago: "if you try to create a libertarian paradise, you will attract three deeply virtuous people with a strong commitment to the principle of universal freedom, plus millions of scoundrels. Declare that you’re going to stop holding witch hunts, and your coalition is certain to include more than its share of witches."
Crypto spaces advertise with "no regulations!" - now guess who ends up there?
I agree. It’s interesting to see how certain personalities are attracted to ventures that are very likely to be corrupt. Like, if you wanted to be corrupt without anyone knowing, you would maybe pick an industry not known for corruption and fly under the radar. But then you wouldn’t have other sociopath friends with whom to share your exploits.
Regarding point 5 (EA people have always condemned doing bad things for good reasons), how different is this from corporate mottos and the like? Every corporation in the world has some sort of statement somewhere saying 'we will always put the customer first' and every corporation in the world also transparently acts in a way at odds with this.
When you read a story about a corporation absolutely shafting a customer and the media print a response from the company spokesperson saying "[Corporation] is deeply committed to the principle of [not doing that]", most people don't treat that as much of a defence of the company - the bad actions speak a lot louder than the pretty words.
In this particular case I think EA is clearly more committed to the principles of Rule Utilitarianism than a random company is committed to 'Embedding Excellence in Every Widget Sold', but I do question whether they should get any credit for that unless they actually enforce Rule Utilitarianism as a condition of receiving EA money (or whatever - I mean some concrete step that elevates Rule Utilitarianism above other plausible Consequentialist frameworks)
I'm not sure that most companies do act at odds with providing value to the customer. Most of the goods and services I buy do what I want them to. My books, groceries, car, smart phone, clothes, and other possessions are generally great. Sure, I've had some bad customer service experiences and whatnot, which isn't fun, but it's typically not a huge problem.
In situations where I'm not the payer, like health insurance, things get a little more dicey. That can sometimes feel adversarial, and I feel like stories of insurance companies screwing people out of coverage are much more common. I don't mean to paint companies as being saintly.
So yeah, companies do bad things regularly, but in general I think a well-functioning market economy leads to most customers being satisfied most of the time.
"unless they actually enforce Rule Utilitarianism as a condition of recieving EA money"
I don't think anyone has ever given EA money to people doing extremely illegal things (well, if they did, they were smart enough not to tell me about it). Someone could put a page in every grant application where you have to evaluate their rule-utilitarianness, but that just seems like dumb virtue signaling by formalizing what anyone with common sense does already.
I completely agree with your response, but (given your reply) I don't think I quite managed to make the point I was hoping to in my original comment. Unless you especially want to contain FTX to this thread I'll think about it some more and bring it to the next OT
I expect the explanation for that is probably something like "most rationalists refuse to speak in absolutes, and 'vast majority' is actually the maximum quantifier intensity they're willing to use without doing a formal quantitative estimate."
Plus ça change, plus c'est la même chose ("the more things change, the more they stay the same"). Finance + innovation + human brokenness = failure and catastrophe; it's happened for a long, long time. And what's that other saying that innovators mention? "This time it's different."
Seems like many EAs have decided that they never bought into consequentialism/utilitarianism after all. And sure, if you look through the thousands of pages written, you’ll find some hedges against doing intentional harm for the greater good.
Unsurprising that nobody has the chutzpah to say the quiet part out loud: If SBF succeeds in destroying the credibility of crypto, ushering in regulations that prevent anonymous ownership of huge piles of capital on blockchains, that would have huge implications for AI safety, since it limits one vector through which AI may control significant resources.
I mentioned this at an EA party as a joke once, but it has become increasingly true, that crypto's greatest contribution to AI-Safety is in reducing the number of intelligent people who would otherwise go into capabilities research
I don't know if you're joking, but in case you're not:
1. FTX was pushing for more US crypto regulations and AFAICT doing a good job. If all they wanted was regulations, they could have kept doing that. Unclear whether the end result of this will be more regulated exchanges, or everyone who would have been on highly-regulated FTX going off somewhere else instead.
2. I'd be surprised if crypto came out of this so devastated that a superintelligence couldn't accumulate a pile of Bitcoins.
3. Probably not worth blowing up so many other charitable opportunities just to accomplish this one thing.
You could call me a concern troll. I don’t buy into AI doomerism; but if I did, I would definitely want to see the price of bitcoin fall, and laws passed that require KYC/AML for all nontrivial transactions.
On a serious note, I know that many EAs really believed in unbridled utilitarianism. SBF surely did. That so many EAs now be like “we never meant it that way” looks pretty questionable.
Edit: Furthermore, many rumors indicate malfeasance going back years!
Highly plausible to me to argue that technical avenues to AI safety have no logical basis (fundamentally impossible to predict what a supercomplex self-modifying system will do in the future), but through policy we may limit directly handing the keys to the economy to AI.
Re point 3: I agree that, granting AI doomerism, SBF and co. still *probably* didn't make the right calculations. But how does one fairly evaluate a decision with short-term costs and long-term benefits while still in the short term?
An AI that sits as an asset on a corporate balance sheet and an AI that manages its own balance sheet should arguably be seen as categorically different, requiring different regulatory frameworks.
While a business may use an AI — think of the AI as a capital asset owned by the business, like a factory — damages that AI causes become a liability of the business. By contrast, blockchains allow AIs to, anonymously on the internet, manage their own balance sheet: own other assets and take risks which may cause great harms. But because they would be anons, nobody will be liable.
While there has always existed a thriving world of secret finance (dark pools, offshore banking, shell companies, etc.), bringing all of that online, throwing unsupervised AI into the mix, and letting it run wild…would make for quite an interesting and chaotic economic environment. And probably massive systemic risks.
Now that crypto is in the news again (for something bad, like always), I have to say I'm surprised that crypto people still believe in this garbage. Like, how is it that people still say that this is the future of currency? How many more scams does it take to prove to you that crypto has no practical use (other than scamming lol) and is value-less? I mean, just look at the whole community, it's literally all a Ponzi scheme (see https://ic.unicamp.br/~stolfi/bitcoin/2020-12-31-bitcoin-ponzi.html by Jorge Stolfi)
There's no specific aspect of crypto that made this possible; it's a fairly run-of-the-mill financial fraud that happens to be large in scale and involve high-profile people.
Now, I am not a fan of crypto anyway, but this doesn't change my opinion much either way.
The Mississippi Scheme failed, but both important aspects of it (paper money and the value of the Mississippi basin) were correct; the immediate environment in which they were proposed just wasn't ready for them quite yet. (Or you could be even harsher and say the stupid officials who caused the bubble to end couldn't tell the difference between the past of money as gold coins and the future of money as a share of a productive enterprise.)
I agree that crypto has so far done a truly terrible job of explaining what it can do better than any pre-existing mechanism, whether that's contract law or standard dollars. But I am still willing (so far, though my willingness is dropping fast...) to be persuaded that *perhaps* such possibilities exist.
It doesn't help when the anti-crypto people come across as even more clueless than the crypto people – for example, if you can't see the poverty in the argument that "crypto is obviously a scam because it is zero-sum" then, yeah, I'm not really interested in your opinions about finance.
"Now that crypto is in the news again (for something bad, like always), I have to say I'm surprised that crypto people still believe in this garbage. Like, how is it that people still say that this is the future of currency?"
I have a friend who does work in the crypto area. He owns a small company that provides programming services and is, essentially, an arms dealer to the blockchain folks.
HIS answer to this is that the need for cryptocurrency is lower in the US, Europe, Japan, etc., but much clearer in places such as Russia, China, and Vietnam.
In addition, folks who are seeing banks and/or VISA/Mastercard block industries they don't like (e.g. gun stores, porn sites) are starting to pay attention.
The scams are bad. But losing your money or getting cut off from the financial system by central authorities is a problem for lots of people -- just not most people in the US/Europe/Japan.
This kind of feels like someone doubting the utility of medicine, being told that they're useful for treating diseases, and then scoffing that they're just a band-aid solution for the real problem which is the existence of diseases. It's not *false*, but unless you have a good idea for getting rid of disease quick we should probably still have medicines.
Why would the economy of the Bahamas suffer because of this?
I guess FTX was a decent size company but I can't imagine it occupied more than an office building, maybe a few hundred employees. That's small compared to the size of the Bahamas, surely?
The economy of the Bahamas is heavily dependent on two things: tourism, which makes up about 50%, and financial services. The latter includes offshoring, being a tax haven, and encouraging crypto and e-commerce. There are always allegations that it is used for money laundering and tax dodging, and there have been clampdowns.
Tourism was very badly hit by the Covid pandemic and by Hurricane Dorian, and hasn't recovered to where it was yet. So a big scandal in the financial services sector is going to hurt.
There are fewer than 400,000 people in the Bahamas, and the entire GDP of the islands is about $12 billion. FTX had a billion in revenue and something like $400 million in net income, and SBF and company were probably spending more than that, since he was stealing money. Even if you assume only a tenth of net income went to the entire executive team and the principal owners' dividends and salaries, that's about a third of a percent of the Bahamas' GDP, equivalent to the US losing on the order of a hundred billion dollars. Not enough to cause a full-blown recession, but still a serious blow.
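A back-of-the-envelope check of that comparison, using the comment's own rough figures (these are assumptions for scale, not audited data):

```python
# All inputs are the comment's rough figures, not verified data.
bahamas_gdp = 12e9                   # ~$12B Bahamas GDP
ftx_net_income = 400e6               # ~$400M FTX net income
local_spend = ftx_net_income / 10    # assume a tenth was spent locally

share = local_spend / bahamas_gdp    # fraction of Bahamian GDP

us_gdp = 25e12                       # ~$25T US GDP, for scale
us_equivalent = share * us_gdp       # the same share of the US economy

print(f"{share:.2%}")                # ~0.33% of Bahamian GDP
print(f"${us_equivalent / 1e9:.0f}B")  # ~$83B in US terms
```

Under these assumptions the hit is roughly a third of a percent of GDP, which scales to tens of billions of dollars in US terms, so the qualitative point (a serious blow, well short of a recession) holds up.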
Bet a lot of money that you will commit fraud, commit fraud, use the proceeds of the bet to compensate the people you defrauded, and the remaining money to fund EA campaigns.
If anyone wants to support research on oogenesis and meiosis, I'm looking for funding. I was awarded $300K from FTX regranting but it looks like they won't be able to actually provide it. My current funding will run out in August, and if I don't get more (at least 100K) I'll have to severely cut back and lay off my assistant.
The FTX situation ties in well with the fraud triangle, almost too perfectly: rationalization, pressure, and opportunity. It was all there. Donald Cressey discussed the idea of a nonshareable financial problem as part of rationalizing fraud, and I see that as a situation where SBF and crew must have rationalized using customer funds as a way to avoid admitting their own failures. There are no great lessons from this, it has happened before and will happen again.
Many people here seem to be thinking of SBF as a brain in a vat making moral calculations and failing. I see a very young man surrounded by other very young men and women in close but complicated relationships, isolated in a penthouse in a country far away from their parents and other relatives, possibly using various drugs that are sometimes known to interfere with good decision-making, and swamped with so much money that everyone from Bill Clinton on down was kissing their asses.
I defy any of you to make excellent choices in those circumstances. This is not an excuse; it's an explanation. It's not an excuse because you can always remove yourself from those circumstances when you start to feel yourself sliding down the wrong chute. But it is a very solid explanation.
This makes a lot of sense. It also reinforces how bizarre and foolish it was for VCs and other institutional investors to give him so much money after so little diligence.
Well let's bear in mind VCs are basically a giant game of craps. They *expect* to lose on 9/10 or even 99/100 bets, and win fabulously on the last one. So it isn't even in their MO to do so much "due diligence" that they talk themselves out of investing in slightly loopy ideas offered up by slightly loopy characters. I mean, if it *were* their MO they would be Bank of America, not a VC firm, and making solid money originating mortgages and business loans to General Mills.
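The "giant game of craps" point is just portfolio arithmetic. A toy sketch with made-up numbers (the fund size and return multiple are purely illustrative, not real fund economics):

```python
# Toy VC fund: 100 bets of $1M each; 99 go to zero, one returns 150x.
n_bets, stake, win_multiple = 100, 1_000_000, 150

invested = n_bets * stake          # $100M deployed
returned = win_multiple * stake    # the single winner pays back $150M
profit = returned - invested

print(profit)  # 50000000 -- profitable despite a 99% loss rate
```

With a payoff structure like that, screening out every loopy-looking founder costs more in missed winners than it saves in avoided losers, which is why heavy due diligence isn't the VC MO.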
Not to be annoying, but isn't there an argument that if you're gonna be kinda-returning your FTX-related money, it should go to the defrauded investors and not charities? Like, from the moral perspective of FTX, on the margin they should've defrauded less even at the cost of less donations. So maybe your FTX bucks should go to what they should've done.
I guess in practice softening the reputation/trust blow would require more money than anyone has, and helping some good charities is doable. But maybe that's the naive-consequentialism thing and you should do as implied by the weird deontology above.
That's the teeth of this dilemma. I don't care about the sports stadia and teams he poured sponsorship into, but there are good causes that were awarded grants out of what is now known to be stolen money. Bankman-Fried's actions affected a lot more people than just those who invested money with him hoping to make returns, and the harm he has done extends well beyond the money he defrauded.
Not sure, but I got the money in January 2021, when AFAIK they weren't defrauding investors. If I learn that they were, stronger argument for giving it to the defrauded investors it came from; otherwise, I feel okay about giving it to the people who need it most.
Also, AFAIK right now there is no way to give money to defrauded investors, it's not like they have a fund or anything.
As I understand it the bankruptcy process thing will be trying to give some of the money back (that's the thing with the clawbacks etc. right?), not sure what would happen if someone sent them money but maybe they would take it.
I am sorely confused. I thought giving to charity meant something like bags of groceries, money for funeral expenses, paying that medical bill, helping scholarship funding. I know, very provincial thinking.
Now you're saying that giving to charity means funding "medical research, or lobbying, or AI research labs." In other words, my charitable giving should go to venture capitalists and lobbyists. Already a pretty wealthy group, one that I could be asking for a charitable donation someday.
Sorry, but I have no sympathy for the comeuppance of this child of uber privilege, nor for those who willingly followed him down the crypto path. And his "victims" - those that spent the money before they had it (in dollars) - well, perhaps my cold heart will someday warm and my sympathy will extend to them.
Bags of groceries and medical bills don't solve anything in the long run, you've just helped some random Joe, but did nothing to prevent their predicament from arising in the future.
This was an acceptable state of "charity" in the premodern era, where problems weren't really solvable and you could only mitigate the damage. We can do better.
The March of Dimes was a charitable organization dedicated to the fight against polio. After the vaccine effectively conquered that disease, they moved on to other medical causes. Lots of money is raised for research on breast cancer. My impression is that EA types regard breast cancer as over-funded relative to how many people it kills!
"My impression is that EA types regard breast cancer as over-funded relative to how many people it kills!"
And right now my impression of EA types is "Look who was your poster boy and where you followed him, so your opinion about conventional charities doesn't count for beans with me. So long as the local group doing the Pink Ribbon thing is not secretly funnelling the fundraising to compounds in the Bahamas, I don't care".
Pretty sure that when we "Look who was your poster boy and where you followed him" it was towards funding exactly the sorts of things that EA was already funding. If you have an object-level criticism of how EA sets its priorities and why we should disregard their thoughts on breast cancer research funding, that's certainly appropriate (and plausibly warranted). But, you're going to need to demonstrate why this comment is anything more than uncharitable dunking.
As someone raised Catholic, I'm somewhat sensitive to the fact that people aligned with one's community don't always act in ways that are perfectly ethical. Given your moral commitments, I'm curious to what extent you believe that malfeasance by someone supporting a nominally charitable organization ought to impugn the stated ethics of that organization.
That might be the case . I don't know what the numbers are -- do you? Probably would save more lives to devote that money to providing people with mosquito nets or something.
" And his 'victims' - those that spent the money before they had it (in dollars) - well, perhaps my cold heart will someday warm and my sympathy will extend to them."
Imagine you are running some organization (maybe doing medical research) and you get a grant to pay for one researcher for five years. You'll get the money at the beginning of each year for the next half-decade.
If the grant provider goes bankrupt after the 1st year, you're not going to have money for the researcher and will have to let him/her/it go.
Are you proposing that folks wait until all five years of grants have been delivered before hiring the researcher? Or what?
No. Just perhaps some assurance that the money was available. Universities (to my understanding) do this regularly - negotiate the funding and the term, then dole out the salaries to the researchers.
I just want to vent here. I was aware that the week before last Alameda's balance sheet had leaked and had red flags. I knew a lot of people had withdrawn from FTX. I had a hardware wallet ready for long term storage. I had moved funds off Binance last year when a similar scare happened. And despite all that, I didn't withdraw from FTX partly because SBF was EA-aligned, he'd donated so much to EA causes, so there was no way the funds weren't there. I'm learning some tough lessons about my naivete.
I've lost all my crypto. A good chunk of my net worth. But worse than the monetary loss, I feel so betrayed. That loss of trust is devastating.
The smallest of things outrage me. SBF retweeted someone implying that there'd be an airdrop for those who didn't withdraw their funds. After withdrawals stopped, after Binance refused to buy FTX Intl, after everyone knew that client funds must be fucked and that he'd been lying so far, he categorically stated that FTX US was US-regulated and totally solvent, and mere hours later FTX US declared bankruptcy. Lying to the bitter end.
All this about St. Petersburg is irrelevant. He could have doubled down forever if he wanted, blown Alameda to smithereens, and everything would still be dandy if he hadn't touched customer deposits. That was the crime. He touched money that wasn't his to touch. That's it.
Thank you for writing that. I'm not a fan of EA and found myself emotionally unaffected by the FTX collapse. Your writing brought me back to earth and I can empathise with you. I wish you well.
I am very sorry to hear about your financial loss, and your well-justified feeling of betrayal.
I have no money in crypto or in the stock market, and honestly wasn't feeling much sympathy for those who lost in the FTX debacle.
Until I read your post.
The fact that you chose not to grab your assets and run because you believed in SBF, and were appreciative of his EA contributions, indicates you are a person with better values and ethics than most of the over-lauded crypto captains.
I hope you eventually grow your financial wealth back greater than it was before this shameful event, and that you keep and grow your sense of altruism as well. It's certainly worth more than imaginary money.
Thank you for opening my eyes to the harm that this has caused to undeserving people like yourself.
I’m sorry to hear about you losing money. Did you not realize crypto is at best gambling and at worst fraud? All of it. Money in a bank is safe. Money not in a bank is not safe.
Well, people may have been prepared to lose all their money due to crypto assets going to 0, but even if you cashed out to US dollars, you lost all of that too if it was on FTX.
And even if you cashed out to US dollars and sent it to a real bank months ago, it seems there is some chance it gets clawed back during the bankruptcy (90 day window apparently).
One possible take is "reporters for the New Yorker are sometimes right". I'm talking about that article that was sympathetic to EA (and to MacAskill in particular) but showed SBF (to me) as someone the perniciousness of whose influence could be seen from miles off.
Happy to take criticism of the scandal markets and improve the resolution criteria. Manifold questions can be edited as one goes. Happy to take suggestions of other high profile people.
Seems like the kind of thing that sounds good in theory, but terrible in practice.
Wealthy people can manipulate the markets if you let them bid. I suppose you could have a rule that people are not allowed to bid on themselves, but if you start a market on someone without their consent, you can hardly complain if they ignore your rules.
If the goal is to increase trust in EA, then having a page where you explicitly talk about how untrustworthy everyone is, with numbers off by orders of magnitude from base rates of fraud, is a terrible idea.
* The psychological effect of being made a market's target (for no reason other than someone thought you should be tracked like this) and then seeing your apparent social credibility being updated in real-time.
* Related to the above, the potential for abuse a la Goodhart's Law. If you're rich you can buy up a lot of "YES they're a fraud", and really hurt someone's reputation among people who are watching that market. Or you can buy a lot of "NO this person is great" on yourself to lower people's suspicions.
* Encouraging people to outsource their feelings about their own personal connections to the market, including by Scott who apparently thinks a threshold of 33% (!) is enough to cut someone off or justify the refusal to do so.
These criticisms boil down to the fact that these markets will never be efficient, and this will create a lot of problems.
My main concern was the short date. In 2019, a market for "would SBF be involved in fraud by 2021" would have been negative. Why not 2030?
Many of your markets seem to be at around 3%. If it's 3%/year, does that mean it's 30%/10 years, or is there a "fraudulent" personality trait that you either have or don't and they're saying there's a 3% chance you have it regardless of time scale? I don't know but I think it would be an interesting question.
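The per-year-versus-flat distinction in the comment above matters for the numbers. If fraud were an independent 3%-per-year hazard, ten years would compound to about 26%, a bit under the comment's rough "30%". A one-line check, assuming independence across years:

```python
p_year = 0.03
# Chance of at least one fraud over 10 independent years
p_ten_years = 1 - (1 - p_year) ** 10
print(f"{p_ten_years:.1%}")  # 26.3%
```

Under the alternative "fraudulent personality trait" model, the probability would stay near 3% regardless of horizon, so the two readings diverge sharply as the market's end date moves out.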
EA has missed an existential risk to itself that (in hindsight) was a fairly obvious problem. Should this affect our confidence that EA can make credible claims about the far future, such as the risk of AI catastrophe or the value of future human utility?
I'm not sure myself because I've never felt longtermist EA has paid enough attention to structural uncertainty analysis. But I'd be very interested in other perspectives
I agree. There are way too many variables for any group to accurately predict the utility of long term actions, even facially and unimpeachably good ones. The law of unintended consequences prevails and we know where the road of good intentions can lead. Think long term but act short term.
I don't really see why. Nobody in EA claimed to be good at evaluating the financial health of companies.
Suppose that a climate change mitigation charity took FTX money and got burned. Would you ask "How can these people claim to predict the future risk involved in global climate if they can't even predict the riskiness of FTX?" To me this would be a non-sequitur - they might be perfectly good climatologists and just not experts in finance and regulation. I think of EA the same way.
That's what I initially thought too, but I think the issue gets thornier the more you worry at it. For example:
1) EA community development is an explicit EA cause area. So I think it is fair to say that 'risks to EA itself' is something EA claims expertise over, in a way climatologists don't
2) Within 'core' longtermist priorities such as AI risk there are quite strong statements made about finance and regulation (e.g. predictions of how much AI will improve the productive capacity of the world). I don't think you can have it both ways; either EA is expert on finance in which case it should have identified FTX as a governance risk to itself (which should lower our confidence in their predictions) or it is not, in which case it should not be making statements about what finance will look like under an AI future (in which case we should lower our confidence in their predictions)
3) In general, we actually might think less of climatologists if they all fell for a Nigerian email scam or something. However, climatologists also have an incredibly impressive predictive record, e.g. telling us when it will rain. Longtermist EA doesn't have a predictive record, because they are predicting one-off events in the far future. Therefore we should Bayesian-update harder against EA than against climatologists, because we don't have as strong a prior on 'EA is stunningly predictively accurate regarding risk assessment'
Wiser people than us have said things like "Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother's eye" and "Judge not lest ye be judged".
The standard of "if you have ever done an 'immoral thing' ever" is impossible to live up to, and the insistence on enforcing it (or more precisely using it as a weapon) is what drives "cancellation".
Are you sure today that something you consider normal will not be considered taboo, or at least something you try to hide, in 2050? Owning a pet? Eating meat? Having an abortion? Driving a gasoline car?
I'm not sure exactly what you are arguing here. Can you clarify? Cancel culture is not being approved of, nor the idea that anyone has lived a perfect life, nor the proposition that only the perfect can presume to judge (an impossibly high standard). Future ethics or morals are not at issue, so I'm unsure what your argument is... I only ask: if immoral acts were committed, could it not be the case that the actors were still ethical beings? Sorry I did not make that clearer.
Cancel culture is FREQUENTLY about retrofitting the instantaneous ethics of today to the past.
I mean, FFS, there are multiple movies and TV episodes that are no longer broadcast because they involve blackface.
And I'm not talking Amos n Andy, I'm talking things like It's Always Sunny in Philadelphia with episodes from Season 9 (2013) and Season 14 (FFS, 2019!) literally cancelled from replay for MAKING FUN, IN A DELIBERATELY KNOWING WAY, of blackface.
We've moved from "blackface is bad when it makes fun of african-americans" to "blackface is unacceptable under *any conditions whatever*, no matter what the motivation". And damn right this is all retro-active.
We have become a society incapable of two-dimensional thought. If an artist is discovered to have held the "wrong views" that artist's entire corpus of work is deemed unacceptable. We are incapable of distinguishing between different dimensions of a person's life, eg between their personal beliefs (or at least our projections of what we think those beliefs were) and any other aspect of their life – their political career, artistic achievements, business creations, etc.
"can one do immoral acts and yet still be ethical?"
The current judgement of society is: "No. if you are discovered to have committed one of a set of particular offences [to be retrofitted as we wish] you are cancelled".
You can read the question in other ways; I've given you one answer to one way of reading it, a reading that's relevant in this context (how "society" feels about someone who committed an immoral act).
crypto isn't yet one of the deadly sins, it's not yet an -ism. But maybe it will be in 10 years?
I think you're conflating two different things in your remarks about blackface.
It's perfectly possible to think that (1) so-and-so, who did a TV show in blackface in 1965, did not have evil intentions and was no worse morally than everyone else around, but also that (2) it would be bad to re-run that show _now_ given present-day attitudes to blackface. Or that (1) so-and-so, who made fun of blackface in a particular way, did not have evil intentions etc., but also that (2) it would be bad to re-run his show because some viewers would be upset by it.
In other words, declining to re-run some past bit of TV is not necessarily a moral judgement about the people who made it. "Cancellation" in the sense of "not showing their stuff on TV any more" is not the same thing as moral condemnation of the person.
(Note: I don't mean that it never turns into that, of course. But you can't get from "X is no longer broadcast" to "the people who made X are being judged immoral".)
The technique of attacking one minor part of an argument, and hoping that that counts as dealing with the larger point, may work well against many internet opponents, but to me it mainly signals an author incapable of engaging with the primary argument.
Even the point you raise is so mired in self-selected concepts that it doesn't prove anything like what you think it proves. We don't broadcast things because "some viewers might be upset by them"?
And who decides which viewers get protected and which do not? I have a long list of things I'd prefer appeared less often, to never, in movie and TV content, but no-one's asking my opinion...
Likewise for the flip-side. What if I am a completionist, who just wants to watch every episode? What if I want to watch Louis, which was widely praised back in the day, just to see if it's as good as was claimed?
(And, stop damn pretending! Individuals who appeared in blackface have been roundly condemned. Do you seriously think Justin Trudeau is ACTUALLY a racist? So why pretend this?
Likewise various people in countries with completely different racial histories to the US who have been piled on by US obsessives. Zwarte Piet is on his way out in the Netherlands, for better or worse, but does anyone seriously NOT expect that come 2030 some part of politics in the Netherlands will involve dredging up videos of someone in blackface in 2018 and making claims based on that?)
I tried to lay out my full argument, not just blackface but the whole point that much of the current US deliberately and maliciously refuses to accept a distinction between an individual's morality (if that morality involves one of the third-rail sins) and that individual's other accomplishments, AND that this morality is capricious and constantly changing; you chose to ignore it.
I wasn't claiming that anything I said "counts as dealing with the larger point". You said something about blackface, I thought it was wrong, I said so.
(I reject the implicit claim that claims other than one's "larger point" should be immune to criticism. They absolutely should not.)
And I didn't pretend that no one gets condemned for appearing in blackface. I didn't even pretend that no one gets condemned for appearing in blackface in the past when norms were different. What do you think "I don't mean that it never turns into that, of course" meant?
I am in agreement with some of your observations. But I'm pretty elderly and have never really grasped blockchains, Bitcoin, mining, and cryptocurrency. It all seems very complex and somehow fragile, floating on nothing but future expectations. But it made people nominally wealthy, so it improved some lives, and some apparently used it to try to improve the lot of others, so I can't cast the whole enterprise and its actors into some kind of outer darkness! It's said that FDR saved capitalism by regulating it, and perhaps the cryptocurrency industry needs regulation as well?
The "nobody could have seen the red flags" argument doesn't really stick for me, because as of now and the foreseeable future, everything involving crypto is one big red flag.
I don't have any financial background at all, I'm just a tech guy following the news. I grant that Bitcoin and a few other early blockchain technologies may have started out with only the best and most visionary of intentions by their creators; even if those ultra-libertarian ideals are themselves questionable, they do have merit and are worth considering. However, I find it indisputable that the honeymoon is over, and whatever castle the visionaries intended to build has long since sunk into a bog of outright crime and games of finding the bigger fool, with basically no redeeming qualities. Blockchain technology itself is either ecologically harmful (Proof of Work) or has a self-defeating design (Proof of Stake).
The heuristic would have to be: If you have been promised a grant by a crypto company but don't have the money yet in cold, hard cash (nowadays meaning a USD/EUR/etc. figure in a computer), you're at substantial risk of losing it and at least hurting your project. If you do have it in hand, you have almost certainly accepted the proceeds of crime, barely legal deception, or a greed game with substantially negative externalities, and all of it with a fairly short logical chain between yourself and those unacceptable events. In either case, according to your point 5, the result would have to be to refuse the money.
I sympathize: being offered an essentially blank check to work on something that is truly important must be irresistibly tempting. I'll never have to make such a decision myself most likely, and I'm glad for it. And yet, I don't have to be a financial genius to see that any company built on crypto money is built on sand, or is part of the scam itself. At this moment, when cryptocurrency companies have not yet had their ultra-libertarian abuses excised by regulation and turned into boring old financial institutions, it really does not matter whether you take money from FTX or any other player. You would be gambling your project or its soul on crypto just as much as anyone else.
This is just blatantly false. Blockchain technology is used in all kinds of non-currency ways that are extremely useful. It may be true that a blockchain-based currency will never be successful, but the idea that the blockchain concept itself is a useless scam (aka "snake oil") is completely ignorant of the real-world facts.
Please provide an example where blockchain is used in a way that is critical to the project and produces value without a superior non-blockchain equivalent.
-edit- this was with <5 minutes of googling. There are a lot more articles, many of which I'm sure are hypothetical fluff pieces, but I'm sure this is not the only case of an actual currently-in-use example.
Blockchain is not critical to this project. This is Walmart providing an API (written by IBM) and requiring that its suppliers use that API.
"Something like this wouldn’t have been possible without using a blockchain-based tracking system. No other technology is capable of guaranteeing the full immutability of all data while still maintaining full transparency and ease of traceability." <- While this claim is true to a point, given that the blockchain in question is run by IBM, they could develop a solution using Kafka (I'm biased by my love of Kafka Oriented Architecture, normal people would recommend a normal SQL database but that's less fun) and standard authentication in half the time (although IBM would probably have to charge less money for this service).
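The Kafka/SQL aside can be made concrete: the "immutability" the quoted claim attributes uniquely to blockchain is available to any conventional append-only log via hash chaining, with write access controlled by ordinary authentication rather than consensus. A minimal illustrative sketch (the record fields are made up, and this is one of many possible non-blockchain designs, not a description of IBM's actual system):

```python
import hashlib
import json


def append(log, record):
    """Append a record, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    log.append({"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return log


def verify(log):
    """Recompute the chain; editing any entry breaks every later hash."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps({"record": entry["record"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True


log = []
append(log, {"sku": "mangoes-001", "farm": "A", "shipped": "2022-11-01"})
append(log, {"sku": "mangoes-001", "dc": "B", "received": "2022-11-03"})
assert verify(log)
log[0]["record"]["farm"] = "Z"  # tampering is detected
assert not verify(log)
```

The point isn't that this toy replaces a supply-chain platform, only that tamper-evidence per se does not require a blockchain when a single trusted operator (here, effectively IBM) runs the system anyway.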
I'm skeptical that there are more than a handful of technologies on the planet that are _literally_ the only way to accomplish whatever it is they are being used for. Walmart clearly was convinced that this had enough advantages to be worth at least trying.
There is always more than one way to skin a cat. I'm comfortable that I have done more than enough to disprove the original claim that "the technology behind it are anything else but snake oil. "
I'm not into crypto. I don't (and have never) owned any cryptocurrency, I don't spend any time researching it, and I basically only know what I hear about on reddit, and I have always been skeptical of its value proposition as a currency, but I'm sure that actual crypto enthusiasts could come up with dozens of non-currency applications. The fact is that a publicly reproducible record (for variously wide/narrow definitions of "public", as the Walmart example shows; you and I can't go verify their ledger) is a feature that, as far as I'm aware, nothing else has. That is not a feature that is useful in all situations, or probably even in very many situations, but it is novel and I don't buy for a second that it's a feature that isn't useful _anywhere_
I think I still don't believe this - I would treat an offer of money from Coinbase about the same way I would treat it from Meta or Google. There really are gradations; the FTX thing gives me some evidence that they're not as strong as I thought, but not enough to totally overwhelm my prior understanding.
Or, as someone here put it, "there is nothing particularly valuable about crypto except as an unregulated sub-economy for people worried about badly-intentioned-authoritarian or well-intentioned-regulatory interference in the regular economy". And "people worried about well-intentioned regulatory interference", includes both criminals worried that the regulators will catch wind of their criming, and people doing legal but insanely risky things that sensible well-intentioned regulators would stop. If a nonstandard medium of exchange is particularly useful to those groups, then those groups are going to be vastly overrepresented among users of that medium.
So, someone promising to fund your charity with their crypto fortune, should be viewed in about the same way as someone promising to fund your charity with briefcases full of hundred-dollar bills. Maybe they're legit, but the odds just increased substantially that you are dealing with crooks, crazy risk-takers, or people who are OK with doing business with crooks and crazy people.
Aside from the ethical question of whether it's OK for a charity to take money that they suspect might have been but don't know for sure was illicitly obtained, from a practical perspective you'd want to A: not make any firm plans around spending that money until you have it in hand and B: not count the money as being "in hand" until you've got it in your own dollar-denominated bank account or at very least an offline hardware wallet.
"Once I heard that the CEO of the major crypto exchange company helped the FTC write the regulations for his own industry, I thought, ‘Well now, that’s a bank, straight up!’ "
Also Scott: "if you try to create a libertarian paradise, you will attract three deeply virtuous people with a strong commitment to the principle of universal freedom, plus millions of scoundrels. Declare that you’re going to stop holding witch hunts, and your coalition is certain to include more than its share of witches." https://slatestarcodex.com/2015/07/22/freedom-on-the-centralized-web/
The core of my argument is that it doesn't matter whether it's FTX, Coinbase, or Alphabet/Meta. If you know you're getting crypto money (or fiat directly converted from crypto) in any shape from anyone, the source of that money was most likely crime or unethical* gambling. Do you also not believe that?
* Unethical compared to e.g. the stock market because crypto has no meaningful economic function, or compared to casinos because of their negative externalities (mining).
I'm using Google and Meta as examples of non-crypto companies.
And yes, almost sure the majority of existing crypto doesn't come from crime or illegal gambling, unless you mean that by definition all crypto investment is gambling.
Also, disagree about crypto having no economic function. I have been trying to help members of the ACX community stuck in Russia, and I was able to send crypto to two people that helped them escape arrest or conscription. I'm really proud of this and it looks like a lot of crypto is being used for remittances or stuff. The media only talks about monkey gifs because that's the kind of people they are. I won't claim the good uses are an outright majority but I still think crypto is +EV for the world.
As a person currently stuck in Russia and evading conscription I totally endorse such use of cryptocurrency.
That said, I'm worried that this mechanism is just as successfully used to nullify the effectiveness of economic sanctions against Russian elites and financing Russian efforts in the war against Ukraine which seems to be a much bigger deal.
The existence of cryptocurrency - an alternative financial system evading the coordination mechanisms of conventional financial system is net positive only if the coordination effort of conventional system are net negative. Do you think this is indeed the case?
There can be good uses to crypto, but if you're talking about taking donations from a crypto company, those donations are probably coming from the more profitable side of the industry, which disproportionately consists of scams and Ponzi schemes.
My guess would have been that the average FTX customer was "Superbowl viewer who bought some Bitcoin hoping it would go up", not "savvy scammer". If it was mostly savvy scammers, I wouldn't care about them losing all their money.
Hm, something seems lost in translation here. You're not getting the money from FTX customers, you're getting it from FTX. And the most profitable part of FTX is not the transaction fees, but the complicated options trading that borders on scams and Ponzi schemes at best and requires them to offshore.
By contrast, by far the most profitable part of Alphabet is serving ads on Google searches, which is providing actual value in connecting buyers and sellers.
Aren't you the author who explained here first, in ways that made sense, that the basic model is an attempt at a virtuous-because-obvious pyramid scheme? You two might be disagreeing about the moral tone of that portion, if so.
Assume you work on an EA project that would benefit from additional funding. You are approached by a potential donor who wants to give back to their community by financing your project. The donor is known to be 100% reliable with his pledges. The donor is also a Mafia don, whose source of wealth you know to consist of both legal/ethical and illegal/unethical business, both making an unknown but significant fraction of the total.
Would you accept the donation? Why or why not? What other questions would you want answered to help you decide?
Edit: I guess what I'm asking is, how many degrees of separation are enough to satisfy your argument #5?
1. It's possible that him transferring the money to your charity serves or whitewashes his criminal acts - he can tell people "Look, I donated to charity, I'm a good guy, you should support the mafia".
2. It's possible that he only committed his crimes in order to give to charity, so that in some weird logical counterfactual sense you could prevent the crimes by not accepting the money.
I think with SBF, 2 is very likely true; for most Mafia dons, I would expect it to be false. If the mafia don was willing to donate anonymously, I would probably agree it was ethical, although still possibly refuse out of PR considerations depending on what those were. If he wanted me to name it after him or something, I would probably directionally think it was unethical, although in some situations I might still accept, like if he wasn't really *that* bad a guy, the whitewashing ability was pretty low, the PR risk was low, and the need was great.
I admit this is less principled than always refusing would be, but it's my honest answer.
That seems overconfident / premature at this point. Or at least there are many alternative explanations; e.g. Yudkowsky's speculation on the EA forum:
> I'm actually a bit skeptical that this *will* have been done in the name of Good, in the end. It didn't actually work out for the Good, and I really think a lot of us would have called that in advance.
> My current guess is more that it'll turn out Alameda made a Big Mistake in mid-2022. And instead of letting Alameda go under and keeping the valuable FTX exchange as a source of philanthropic money, there was an impulse to pride, to not lose the appearance of invincibility, to not be so embarrassed and humbled, to not swallow the loss, to proceed within the less painful illusion of normality and hope the reckoning could be put off forever. It's a thing that happens a whole lot in finance! And not with utilitarians either! So we don't need to attribute causality here to a cold utilitarian calculation that it was better to gamble literally everything, because to keep growing Alameda+FTX was so much more valuable to Earth than keeping FTX at a slower growth rate. It seems to me to appear on the surface of things, if the best-current-guess stories I'm hearing are true, that FTX blew up in a way classically associated with financial orgs being emotional and selfish, *not* with doing first-order naive utilitarian calculations.
> And if that's what was going on there, even if somebody at some point claimed that the coverup was being done in the name of Good, I don't really think it's all that much of Good's fault there - Good would really not have told you that would turn out well for Good in even *naive first-order thinking*, if Pride was not making the real decision there.
There’s definitely a different perspective you get coming at cryptocurrency and related financial instruments if your primary exposure to it as a business is ransomware. Even some people who made large amounts of money with early BTC purchases developed a lot of skepticism when it evolved from a drug-purchasing currency to a ransomware currency.
I know it’s an unfair assessment (criminals use cash too! And there’s no public ledger for cash!), but it was still jarring to see.
It's probably sane to accept money someone in crypto wants to give you (the worst case is you get nothing), but otherwise not to even touch the stuff, and also not to depend on any money you have not yet actually received *and* converted into the kind of money that you can in fact count on working.
Seems like a good idea but might not be enough. Apparently it still might not be "real" if it eventually gets reclaimed as part of the bankruptcy process. In which case it ends up sort of like a loan?
"...make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively be emotionally dead to anybody who goes above a certain threshold."
I am in a red state/rural area in the US that has gotten flooded with more people. It's tempting to say this has happened post-pandemic, but census data from early 2020 shows this isn't true.
I believe the area has been steadily growing since the early 2000's, with a slight slowdown during the worst of the 2008-2012 recession years, but I think it intensified shortly before the pandemic, and absolutely exploded during the pandemic.
The result is that my little hometown, which has had problems with infrastructure for as long as I remember, is really struggling to keep up with the growth. We have a set of problems that have been building for a long time: not enough police, not enough DMV employees, roads built for much less traffic, and the big one: not enough housing.
The area isn't actually super conservative. There's a handful of locals who like to remind others that this has traditionally been an area that votes conservative federally, but elects politicians from both sides to state and local offices. The area has also been pro-union, etc. in the past. On the other side of things, there's a loud element on social media saying things like "Go home, Californians." (This sentiment has been around for a while, but seems louder to me.) Online, in places such as local Facebook groups, there's anger surrounding wealthier people moving into the area and pricing out people who have lived here forever. This results in people being upset whenever higher-priced, non-affordable housing is built. There is a slightly different group of people upset when cheaper apartments are built in neighborhoods where they might cause them inconvenience. Sometimes I just want to comment in caps lock "DO YOU WANT HOUSING OR NOT?" but I refrain.
This political debate is happening in the face of the weird economic stuff happening in the US recently: companies having difficulty finding service-sector employees and people having difficulty finding rentals and buying houses are the big ones. Homelessness is rising, too.
The difficulty finding service-sector employees is explainable enough, since many of the people that moved here are upper middle class people who work remotely, and some service-sector employees are getting driven out by lack of rentals. The weird part is that property taxes are going up due to rising home values, and there are lots of new homeowners paying property taxes, and yet somehow, this money isn't enough to support the local government. A few of the schools are filled to the brim and don't have enough physical space to put students, there are roads that are way overdue for expansion and maintenance, and yet somehow, in spite of all the new money sources, it's just not enough. (I know how this sounds. I have reasonably high trust that my local and county governments are not corrupt, but I haven't dug into this.)
People have been "Voting with their feet", and the lenient COVID policies, general good management, and high livability of my area has gotten voted for, loud and clear. And yet the good management and high livability are shrinking because of the influx of people, and there doesn't seem to be any easy solution. Some of the problems don't even seem to have clear causes, such as the "not enough property tax to support the county" issue.
I am wondering if any ACX readers have any thoughts on this, or have seen their hometown go through something similar. I know mine is far from the only town experiencing a culture shift, housing issues, and infrastructure problems post pandemic, but I am wondering: Is there a good way to handle it? I lean conservative, as does my town. Everybody is in horror of the area "Turning into California" (i.e. high taxes, high prices, high regulation, high homelessness. I realize this is a generalization and more reflective of one or two areas in CA rather than the whole state.) Are there good solutions to this that don't involve raising taxes, prices spiking even higher, or fun new laws and regulation, such as restrictions on vacation rentals?
A previous place i lived was a very desirable place to live. But was very against building dense housing. Therefore the housing prices increased (and nearly doubled in some parts during 2020/2021) over the past 10 years.
A nearby town (in a much more conservative area) had about 800 residents 20 years ago. It's now 10,000, because this town was willing to build housing and people were looking for a place to live. Luckily I think they have been able to keep up with the infrastructure.
So my takeaway for you is that your town's problems are due to some other place not building housing, and it's unlikely that your town can fix its own problems. Unless something drastically changed in your town (like a major new employer opening or a great public amenity), people probably didn't view it as their first choice to live. But they couldn't afford their first choice, so they went to your town, where they could.
Also, does your town control its own budget or is it part of a county? Different states handle this differently, but it could exacerbate the problem depending on the competency of those in charge, the budget size, access to debt, etc. Infrastructure changes take a long time. It's much faster to build 1000 new houses than to build new schools, fire stations, roads, etc.
On top of this, a city is very unlikely to build excess capacity for things like schools even when they should. In my hometown, they tore down my elementary school to build a new one. The new one opened and they already needed portable classrooms to add capacity. They have since expanded the school twice. And this is one of the richest areas in the country with very high taxes. Money was definitely not the issue.
Gee, it's almost like Paul R. Ehrlich might have been on to something...
I have nothing more to say. There is no subject on god's green earth that makes people stupider than this one, but, yeah, behavior has consequences. Where TF did people THINK their kids were going to live? On Mars?
Considering humanity still exists, Paul Ehrlich was most assuredly NOT on to something. There aren't many "experts" who cling to credibility after being more wrong than "the internet is no more transformative than the fax machine," but he's one of them.
Any correct prediction Ehrlich made is probably coincidence, given that his predictions about the consequences of population growth ranged from "directionally correct" to "diametrically opposed to reality".
Paul R. Ehrlich really wasn't onto much, as it turns out. The world already produces enough food for there to be zero famine; too much of it just doesn't get where it's needed. Repeated studies show that 40% of food in the U.S. ends up in the trash at some point in the farm-to-fork chain. Close to half the countries in the world have birth rates that have fallen below the level of replacement. China's population will peak soon and decline.
Yes, there are a handful of countries that still have rampant, unsustainable population growth, but the idea that "there's just way too many humans for the planet to ever support" has been shown repeatedly to be untrue. People can't all live in the same few most desirable places, but Earth is still vast and humans get increasingly efficient.
The number one agricultural exporter in the EU is big Spain. The number two? The tiny Netherlands, because they grow so efficiently in greenhouses, vertical farming, etc.
Humans on Earth face some pressing problems -- but long term, overpopulation is not one of them. Birth rates fall naturally as women achieve higher rates of education, and except for a few outliers like Afghanistan, women and girls are getting better educated the world over.
ALWAYS we get this. That food isn't running out. Like that's the important thing.
What's running out is "footprint required to sustain a certain level of lifestyle".
Greenhouse gases -- much less of a problem with fewer people.
Species destruction -- inevitable if we insist on using all land everywhere for human purposes.
Look at the post I was replying to – a claim that we need to have denser development because it's "more efficient" even though most people don't want it. We wouldn't need that "efficiency" if we stopped creating new people each of whom moves on (in time) to need their own housing etc.
But I'm not interested in arguing about this. I have spent my entire life talking to people who insist that saving wildlife or dealing with greenhouse gases are vitally important problems but who ALSO insist that they have nothing to do with population.
You lot can have your stupidity and the inevitable consequences; I have better things to do than talk to a brick wall.
What rules? I'm not aware of any zoning rules in NYC that say you can't build residential skyscrapers anymore. And apparently, neither are the builders:
Large swathes of New York are covered by "historic preservation". And, not specific to NYC, but the feds have encouraged neighborhood-level "participation" in development decisions that foster NIMBYism in cities all over the country.
Yes, a stable genius like yourself must have better things to do than call anyone who disagrees with you stupid.
Why bother with an open comment thread when contrary comments leave you sputtering with impotent indignation?
Sounds like you've spent your life working hard on solutions to help others. Or maybe just being disagreeable and argumentative. Reminds me of a grumpy old
dude I know that chalks up his lack of a partner, children or a big loving circle of family and friends as "not wanting to contribute to overpopulation." Instead of "People don't seem to enjoy my sour company much."
> the entire population of the Earth could fit comfortably into a mid-size American state
Tokyo population density= 6158/km2
8b at that density gives you 1.3 million km2
That's twice the size of Texas, or approximately the size of Germany, France, BeNeLux, the Alpine countries and Italy combined, which is remarkable really.
So quite a bit larger than the average US state! Sure you can go denser than Tokyo (at the density of Hong Kong island itself, they'd nearly all fit in California), but anything denser than a famously sprawling yet dense city seems untenable.
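The density arithmetic above is easy to sanity-check. A minimal sketch (the population and density figures come from the comment; the Texas land area of ~695,662 km² is a figure I'm adding for the comparison):

```python
# Back-of-envelope check: how much land would 8 billion people need
# at Tokyo's population density, and how does that compare to Texas?
world_pop = 8_000_000_000
tokyo_density = 6158          # people per km^2 (figure from the comment)
texas_area = 695_662          # km^2 (assumed Texas land area)

required_area = world_pop / tokyo_density   # ~1.3 million km^2
ratio_to_texas = required_area / texas_area # ~1.9x Texas

print(f"Area needed at Tokyo density: {required_area:,.0f} km^2")
print(f"Multiple of Texas: {ratio_to_texas:.2f}x")
```

So "roughly twice Texas" checks out, and the quoted "mid-size American state" claim would indeed require a density well beyond Tokyo's.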
I honestly believe that somehow we forgot how to build infrastructure in a reasonably inexpensive way. It’s worse in some areas but overall large projects have run into some disease that makes them cost prohibitive. Some combination of too many veto steps most involving courts, environmental rules, nimbyism, etc etc ad infinitum.
It shouldn't be that way in my conservative, fairly low regulation area, but I think it is, to some extent. We do have some pretty significant nimbyism, as well as mild-moderate environmental rules. I don't know if it has gotten significantly more expensive in recent years, or if something about cashflow has just gotten harder.
Hypothetically, it's not so much that we forgot how to build affordable infrastructure as that we forgot how to keep everyone involved from taking a good bit of what could be taken.
Spread-out infrastructure is very expensive in the long run. Strong Towns has done some work on measuring this and not just bikes has some videos on their work (e.g. https://www.youtube.com/watch?v=XfQUOHlAocY, https://www.youtube.com/watch?v=7Nw6qyyrTeI, https://www.youtube.com/watch?v=VVUeqxXwCA0). The only solution, unless the town is very wealthy, is to allow dense development everywhere. Get rid of single-family only zoning and parking minimums, put in some bike lanes, restrict cars downtown, maybe build a tram if you have enough people. And the only way to keep the price of housing down is to build more housing.
I've lived in a relatively dense, well-off suburb for a decade. We have plenty of townhomes and apartments, and we have bike lanes which nobody uses, and if there's a parking minimum, it's already ridiculously low for the number of cars.
The biggest single problem with increasing density and getting rid of cars for most Americans that don't already live in an inner city is probably the role of the supermarket in American life. We've come to expect the selection of food that comes from a large supermarket, and the modern supermarket is impractical without a car.
I live close enough to walk to the supermarket, and I do walk there on occasion, but only when I need one or two items that I'm missing or otherwise have to have right now. Otherwise, I make a trip once a week via car, and regularly have trouble getting my week's worth of groceries in from the car in a single trip. I don't drink soda or beer, but having to carry a six-pack on top of what I already have would make it impossible. And I'm single, and frequently eat at the office, so my grocery bill is fairly short. The issue of weather has already been brought up, but there's an additional layer of challenges in dealing with a family besides the massively larger amount of groceries required, such as laws restricting your ability to leave kids alone to run to the store.
(If the biggest driver of suburban living in American life is not the role of the supermarket, it's almost certainly making educational quality more of a determinant in where to live than where you work.)
I don't think anyone should feel guilty about making a weekly grocery run, if that's all you do. People get excessively binary about this. A community with one car per house that mostly stays parked would be doing much better than average, for the US. That's like a retirement community where people hardly drive at all.
It's not very practical largely due to commuting and other trips that people want to take.
[Caution, the following is somewhat fresh off the train of thought, and thus half-finished]
I think it ultimately boils down to people and the goods they consume need to travel more.
I have relatives in small east-coast cities and large towns that predate the car and were built for mining or industry. The city/town had a factory or mine where half the people worked. The rest worked in town doing the other necessary jobs. It was easy to determine where to live: within walking distance of the factory/mine, and probably closer to the church of your particular denomination. This scales somewhat as trains / cars are introduced. If the factory was big enough that it required more people than the town could manage, a rail / road network appeared with the factory at its hub.
The push away from industry after WW2 changed things, and the many changes to labor since have further pushed the transit systems. Instead of having a lot of big factories, you have a lot more smaller industries and commercial offices. These are spread out a lot more, and the hub and spoke system doesn't work as well. The following smaller component changes are the ones I've thought of:
1. As population density of the city increases, the land at the center (the hub of the transit network) goes up in value faster than the land on the periphery, making it cheaper to start a new business at the periphery.
2. Logistics has changed. Intermodal transport means that more cargo is carried on trucks, at least for the last mile, and factories don't need to be built on a rail line.
3. We have a lot more women in the workforce and more two-salary households. It's easy to live near where you work with a one-salary household. If both adults are working, the odds are high that one of them will have a commute.
4. People switch jobs more frequently. I live in the DC suburbs and there are tech jobs scattered all over the place. I'm fortunate to live close to where I work and the odds are if I switch jobs it will mean a significant increase in commuting miles/time.
5. DC has a comparatively decent hub and spoke mass transit system that's great if you want to work at/near the hub (the Pentagon, for example), but lousy at going spoke to spoke, and even extending the spokes takes an inordinate amount of time/money/effort because the area is already developed by people/jobs that didn't want to pay to be downtown.
6. Once you've assumed that someone is going to have to commute to work no matter where you live (since you have two workers), you may as well choose the best possible location, which if you have kids is probably determined by the quality of the school system. While school quality is probably tied to housing quality, it almost certainly doesn't tie in to the job quality; expensive areas with good schools still need low-wage baristas and supermarket employees.
7. Immigration pushes communities to live close together even when they don't necessarily work close together. This was the case before World War 2, but immigrants are needed to do the messier low wage jobs, in part because they aren't used to the higher standard of living.
This all sounds plausible, but I can't square it with the actual history of urban and suburban redesign, especially in North America. Starting shortly after WW2, new car-dependent developments were built outside cities, in an entirely new pattern of development. At the same time, the interiors of cities were forcibly demolished to make way for highways and parking lots. This was mostly due to the increasing popularity of cars (and, depending on how charitable you are, racism); it was not a response to 2-income households (as far as I know, women didn't join the labor force in large numbers until the 70s) nor spread out employment (most people still commuted into the city; suburban office parks didn't become popular until the crime wave in the late 60s or later). For the same reason, living near work wasn't a priority, at least as far as I know.
Suburbs always existed, and always had plenty of businesses. In fact, they used to have more, because people had to be able to walk to the stores! Separating stores from housing is new, and in contrast to your statement that
> expensive areas with good schools still need low-wage baristas and supermarket employees
many suburbs actually ban *all* commercial development, and thus don't need any such employees at all. All of the stores are far away from the most expensive areas. One of the more well-known is https://en.wikipedia.org/wiki/Los_Altos_Hills,_California, after it got into a fight with Waze.
A large number of people don't want to live in the city. Once the middle class among them can afford cars, there's an incentive for some of them to commute via car. Once they start moving out from the city, there's an incentive to build stores and services for them, and an incentive to relocate jobs to be closer to the workers. This incentivizes more workers to move to the suburbs, and the cycle repeats. It's not hard to see why the end of World War 2 might have been a specific trigger for this process to take off; you have the (practical) end of the Depression / New Deal, you have a large number of young men coming out of the military that need jobs and places to live, you have heavy industry that's coming off of war production, and you have the US with the worlds largest functional economy.
If you start from an existing city with a reason for a dense core of jobs that can't go away (Wall Street for New York, the federal government for DC, port facilities for other cities), the city remains, though you get permanent traffic issues. If the jobs can go away (automaking for Detroit), the city hollows out and the jobs eventually go elsewhere. If you start with a new city built for cars, you end up with the sprawl of Los Angeles, balanced halfway between urban and suburban (and with it, permanent traffic problems).
As far as Los Altos Hills goes, it's the size of a large HOA development. At 10 square miles, yes, you don't need commercial development, because there's only 10,000 residents, all within 4 miles of stores. More importantly to my point, the children that live in Los Altos Hills almost certainly don't go to school with the children of the minimum wage workers that work in those stores. Likewise, the rich that live in American cities almost certainly don't send their kids to the same schools that the bottom half of the population attends.
Yeah, reducing households from 2 cars to 1, and replacing some number of car trips with other methods, can still greatly reduce costs. In terms of space (no need for 2 car garages and high parking minimums everywhere), in terms of maintenance (less wear and tear on roads, and fewer lanes), in terms of individual family finances (cars are expensive).
People lived in rural places before cars. How do we think they did it?
In rural areas, "before cars" is basically like the Amish do it. A town might be a built around a train station. In many places in the US, the old train stations were repurposed long ago.
But there's no reason to look that far back. There are more paved roads than there used to be. Cars had fewer features and went slower (but were more dangerous). Consider the original Volkswagen beetle, which was manufactured in Mexico a long time after they stopped making them here. And without highways, people with cars didn't travel as far.
Anyway, I think it's better to think about rural, suburban, and urban areas separately.
That's like saying 'people lived before electricity'; we know how they did it, they had a much lower quality of life. You can tell all the middle class families in suburbia that fresh vegetables are for the rich only (look up urban food deserts) but they're not going to like it because they're losing a lot, which may be money, freedom, or time.
I think because people who like cities (or just don't like cars) don't understand why the people that live in suburbs like suburbs, attempts to fix the situation don't work. I live in an area with one car garages and every available parking space is full, and not necessarily because they have two cars; half of them are using their garages for storage. We've got mixed use zoning with ground level parts set aside for shops, and half the spaces are empty probably because it's hard to run an economically viable business at the small scale, mostly because of labor; the ones that can stay in business are either high margin professional services such as tax preparation, or niche family businesses like restaurants and ethnic bodegas (whose customers come by car; the people that live above them generally don't shop there).
"They had a lower quality of life" isn't really an answer. A lot of things are cast as being *impossible* to do without cars, without knowing what actually happened. "It was impossible to read at night before electricity" is very different from "electricity is much better than candles." The difference is that unlike with electricity, there actually are very good alternatives to cars for a lot of use cases. No developed city has given up electricity, but many desirable places have walking, cycling, and transit alongside cars.
I have looked up urban food deserts, and last time I did, I actually found that empirical research did not support the idea that they're very prevalent or a barrier to buying food. To the extent they do exist, it's because of things like high poverty in certain areas and the fact that cars are expensive and transit wasn't good enough. Lots of people live in cities and have no problem buying food; there are grocery stores all over the place.
Whether or not one thinks bike paths and small apartment blocks are "hideous" but massive parking lots and traffic in downtown are not, it is a simple fact that extremely spread-out infrastructure is much more expensive per person and if your town isn't super-rich, you're going to have to make tradeoffs.
Disagree. Cities are way more expensive than suburbs. Dense housing is always more expensive than low density -- the building standards are much higher, the construction more complex, the cost of construction much higher in an urban setting, and the cost of bringing in the food and water and shipping out the shit and trash much greater. People move to the burbs because it's freaking cheaper, even accounting for the increased time you have to spend in a car. People move to the city because they need the excellent economic connections, or love the nightlife, and not because it's cheaper.
I must say I'm confused. I constantly hear how car-dependent suburbs are what the rich want, and only Europoors who can't afford a car are stuck in the crappy city. But now you're saying the suburbs are cheaper and people live in cities by swallowing the additional cost of living. Which is it? If it's actually cheaper to live in the suburbs, why don't more poor people move into them?
In any event, you're comparing the cost of housing rather than the cost of infrastructure, which is what I explicitly was referring to. Building the same road, but longer/wider, obviously is more expensive, and is not impacted by "higher building standards."
But also, "dense housing is always more expensive" is just wrong. The cost of building vs land results in different optimal points, depending on factors like land price, see e.g. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3674181
And of course, you are ignoring endogeneity. Yes, more dense housing gets built in places where housing is expensive. Causality runs like desirable -> expensive -> dense housing. If you built a single-family home in downtown Manhattan, would it be cheap? Of course not; within a city, single family homes are almost always more expensive than, say, condos (e.g. https://wowa.ca/toronto-housing-market or the data cited in https://youtu.be/uJ1ePlln6VE?t=602). If you knocked down towers to build single family homes, would the price of housing go down? Obviously not (unless you built enough to destroy all of the things that make it a city people want to go to in the first place). And, in a similar vein, if you took a suburb and made it slightly denser, housing will be cheaper than it otherwise would be.
Actually, both the rich and poor end up in cities, but for different reasons. The rich need the connections and don't give a crap about the cost -- they can afford the $10 million pied-a-terre in a skyscraper with a great view, and to pay the valet or take a helicopter to the airport. The poor don't always end up in cities -- the banlieues of Paris or a certain chunk of California's Central Valley are excellent counter-examples -- but they are definitely there in large numbers, because the city-country wage differential rises faster at the low end than the city-country cost of living, perhaps because of the presence of the rich (and rich companies), who don't mind paying a premium for the menials they hire to service them.
The cost of infrastructure in the suburbs is completely dominated by the cost of housing. Building roads is cheap by comparison. To replace the 40 houses on my ~1/4 mile block would cost ~$10 million, while building 1/4 mile of asphalt two-lane would cost maybe $500k. And I'm not considering the cost of land at all. (If I did, the price to build the houses would triple.)
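Plugging in those round numbers makes the per-house comparison concrete. A quick sketch (these are just the commenter's estimates, not actual construction-cost data):

```python
# Per-house share of housing cost vs. road cost on a hypothetical
# 1/4-mile block of 40 houses, using the comment's round figures.
houses = 40
housing_cost = 10_000_000   # ~$10M to replace the 40 houses
road_cost = 500_000         # ~$500k for 1/4 mile of two-lane asphalt

housing_per_house = housing_cost / houses   # $250,000 per house
road_per_house = road_cost / houses         # $12,500 per house

print(f"housing per house: ${housing_per_house:,.0f}")
print(f"road per house:    ${road_per_house:,.0f}")
print(f"housing is {housing_cost / road_cost:.0f}x the road cost")
```

On these assumptions the road is about 5% of the housing cost, which is the commenter's point; note it counts only the one road, not water, sewer, schools, and the rest of the infrastructure the Strong Towns analyses upthread include.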
And I think you're wrong about the causality arrow on the second one. Los Angeles didn't start out with skyscrapers, it started out as a village of grass huts. The location of cities makes them desirable, for any number of economic and geographic reasons, and then people move there, and the housing gets denser as a partial offset to getting more expensive (since denser housing reduces the demand pressure on price), and then people who lose the bidding wars start moving down (to much lower quality housing, i.e. the slums) or out, to the suburbs, where you can get the same quality for lower cost.
I didn't mean to characterize California that way; I meant to characterize "what people fleeing California are fleeing from." I admit to assuming that the people fleeing California with enough money to bid up the prices of rural houses insanely are much more likely to be denizens of Santa Clara County fed up with the density and longing for wide open spaces, and not rural Californians who sold the raisin farm and are looking for another place just like home, except four states over.
Many things are expensive; the point of being rich is to pay for them. The US is the richest society in history, so simply claiming that "spread-out infrastructure is very expensive" does not strike me as a good argument.
Both medicine and tertiary education are also very expensive, and the US has decided to spend obscene amounts of money on both of them to questionable benefit – at least with spread-out infrastructure you know what you get, and what you get is something a lot of people want.
Would you trade the last year of your life (the one spent on life support or dialysis or sleeping 20 hrs a day in an assisted living facility) for a nice suburban house? Hell yes, I would; and I'm hardly alone.
We certainly waste a lot of money on stupid health care costs, but it's still the case that the amount of money required is out of reach of below-average or even average-income towns. Look at some of the numbers cited in https://www.youtube.com/watch?v=XfQUOHlAocY: The US is wealthy but nowhere near infinitely so, and it shows.
Moreover, housing specifically is very artificially supply-limited. Making more money available for it would just increase the price.
Low-density development isn't the result of people spending money to reduce density. It's the result of rules restricting development. There's no "trade" happening. Not even an exception for retirement homes, which don't strain the schools or cause crime. https://betonit.substack.com/p/no-retirees-in-my-backyard
The folk economics of housing is just not rational:
I'm doubtful that increased density (to the point where cars and school buses aren't needed) would reduce expenses in education. I don't think Strong Towns blogs about this? Rural schools often have less funding, though.
I don't think they talk about the cost of schools, no. There are some economies of scale which are best achieved with at least a medium size town, or several smaller ones working together. But I don't think schools have an extremely high floor in terms of cost, the way that other infrastructure does when you try to make it work for a very spread-out area.
I could see how it might help, by pushing the growth toward the big town districts and away from the little rural districts. The districts needing to expand their buildings and staff would be fewer, and they would be the ones with the best resources to do so.
With that being said, people with children often end up in the less dense areas due to bigger houses and more space for kids to run around, so maybe it wouldn't help at all.
And I feel like any changes to schools would happen years or decades after the initial changes to density. I also feel like it's much harder to predict than "higher density=cheaper housing". So I think school problems should be left out of the equation on this one, since they aren't directly related enough to be predictable.
The area started out as rural/farming, and has become more suburban in parts over time.
I think people that live in the residential areas in the two biggest towns could do some walking or biking. It wouldn't be convenient, and in some cases it would be dangerous, but I don't think it's the suburban housing developments that are spread out, for the most part. (I can think of a few exceptions that are along a highway between towns.)
The problem is the outlying little towns, the rural people on multi-acre tracts of land, and the farms. It would take many of these people hours to get to the nearest Costco, Target, or Home Depot by bike. Our significant winter snow and ice isn't conducive to biking, either.
You couldn't condense the area without abolishing about seven little tourist/residential towns and villages that are miles away from the nearest area with big box stores, a hospital, and other resources that people need. Even then, there would be the people living on the outskirts of civilization, on acreages in the woods, some of them farming a little or a lot. I don't think it would be practical to have multiple 20+ mile tram lines running back and forth between the main town and the smaller areas. Maybe more buses (Our bus system exists but isn't great) but probably not trams.
I agree with you regarding getting rid of single-family zoning. We desperately need more housing. But the structure of the area, not to mention the weather about a quarter of the year, demands cars.
If walking or biking is dangerous, it sounds like a limitation of infrastructure that should be improved. And walking and cycling paths are cheaper to maintain because A) they carry more people in less space, and B) vehicles are heavy and cause more wear and tear. Same with winter cycling--I don't think there's actually that big of a correlation between temperature and biking.
Are the "small outlying towns" separate entities, or clusters of development within the same town? It should be possible to have small towns with a reasonable number of stores reasonably close. As one of those videos indicates, big box stores aren't particularly efficient for the city financing them.
I'm not sure how small these "outlying towns" are, but living spread out in remote places can be expensive, or require giving up certain benefits. It's simply not feasible to provide wide paved roads, central water and sewage, garbage collection, advanced medical care, etc. in a way that is convenient to every tiny cluster of people. They might require cars (although it sounds like the population should be small enough for small roads, or even dirt or gravel for the more remote and rural areas), but at some point people have to make tradeoffs. Like having a septic tank instead of a water treatment plant (this may already be the case).
If the area is growing steadily, then it's probably not going to be all rural forever, and it would make sense to consider planning for a future other than "more sprawl forever." It sounds like this has already started happening ("has become more suburban"). Suburban doesn't have to mean "car dependent sprawl."
Without knowing more details I don't know how feasible trams vs bus vs no transit would be, but whatever clusters of development (housing, retail, office, etc.) there are can at least be walkable once you arrive, which again greatly reduces maintenance costs.
My area is probably on the right hand side of the "winter severity" bell curve. We have long winters, sometimes from November till April, with lots of snow and ice. Often there is significant snow or ice on the road for weeks at a time. It's the kind of area you need a four-wheel drive and/or snow tires to be really safe driving in during much of the winter. Sometimes the weather will drop below zero Fahrenheit for weeks at a time. Other times, intense wind and snow will cause difficult travel conditions for several hours. It gets dark pretty early, too. I wouldn't want to be at work, have a major wind/snowstorm hit, and be stuck at work because I decided to bike that day. I am aware that bikes equipped for snow exist, and I am definitely aware that proper winter clothing exists, but I think, of the people who might choose to bike, only about 5% would be willing to do so in the kind of severe weather we get regularly during the wintertime. I know I wouldn't be willing to do so--I'd rather walk, to be honest, because I think I would be less likely to fall. I am not an experienced cyclist and feel my feet are more trustworthy than something I need to balance on two wheels. Maybe a trike or four-wheeled cycle would be more stable or appropriate.
Regarding small, outlying towns: Think clusters of houses and touristy shops along a state or federal highway. The individual towns themselves are actually pretty walkable, but the people there need to go to the big town at least weekly due to lack of reasonably priced grocery stores. Many of them commute to work in the big town daily. So even if the big town were to become more walkable, a significant portion of traffic would remain from the people commuting in. I think some of the outlying towns have city water systems, but many people in the rural areas have septics and wells. It's actually the big town experiencing difficulties with its water infrastructure at the moment.
I am with you that the big town probably needs to become more dense and walkable, it would be great if we stopped using up all the land along the highways for houses, and we could use a boost to our public transit. (There's a similar area nearby that does buses pretty well, and I think we could really use that). However, because of the large number of people that need to drive in from outlying areas, the significant winter weather, and the large volume of tourists who drive, the big town also really needs to expand its capacity for vehicles.
There's famously a town in Finland that has quite a lot of winter biking (https://www.youtube.com/watch?v=Uhx-26GfCBU) but I understand if that's not a priority. I still am not sure if there's really a correlation--lots of places high in the Colorado mountains get snow from September through May and have extensive bike paths--but they also tend to have excess money. Even if they're only used part of the year, road damage is largely because of use rather than weather, so it still might save enough money on road maintenance to be worthwhile. But I'm definitely not sure.
If enough people are commuting into the big town, then a commuter tram line or something might make sense. Not sure what sort of numbers you're talking about. If people do have to drive in, then the city itself can remain walkable by having cars parked on the outside or underground, as described e.g. in https://www.youtube.com/watch?v=ZORzsubQA_M&ab_channel=Vox
Without more details, no one can figure out why the big town is having issues with its water system. Did the population growth just outstrip capacity?
On ice, it's a lot easier to fall over when walking than biking - the bike is self-stabilizing and nothing is pulling you sideways, while walking is a process of falling forward and catching yourself with a leg in front where lots of things can go wrong. The exception is making tight turns, of course.
I agree with the rest of your argument, though. In particular, biking over snow is a huge pain in the ass without fat tires.
You have a ton of growth and a giant new tax base to make the town better. Take it from someone who has spent time in areas that are literally dying, with abandoned homes and a shrinking tax base: these are good problems to have!
Yeah, up until 2020 I responded to complaints about the area growing with "Well, at least it's not economically depressed." The wildness that happened during the pandemic changed my attitude somewhat, but you're right. Thank you for the perspective.
This sounds like you could be describing almost any small to midsize growing municipality in the USA.
I say this a little in jest, but really: I’ve moved between several of these areas, and at this point I can predict the mood of the populace, government, and immigrants by heart. Same complaint about vacation rentals, even as they dig their heels in against new multi family/dense housing, same urge to hike property tax rates despite little evidence that current revenues are being spent productively or that there is a budget shortfall (versus an inability to see the results of spent money fast enough to satiate the spenders).
I would urge you to be skeptical. People move because the area works. No town ever became a hellhole because a bunch of people decided they loved it there. Lots of folks like to move into a desirable area and then pull up the drawbridge, and trumpet loudly the whole time that they’re protecting the place.
On the other hand, an area can become a hellhole for one group of people when another group of people move in (e.g. rich people / poor people, in either direction).
People tend to instinctively fear change, so most panics are overblown, but just because the people who move in like the place doesn't necessarily mean they can't have significant negative impact on people who already live there and have different tastes or needs or means.
Thank you, this is encouraging! I probably need to stay off the stupid local facebook groups, where the "pull up the drawbridge" call is pretty loud, and trust the market to fix the housing problem. Eventually.
Anyway, you described the attitudes of the various groups here perfectly. Glad to hear that it's tale as old as time and might be less indicative of real, permanent problems than just growing pains that will probably settle out.
I have lived all my life in a small town in California, and this happened here about 30 years ago. It was partly precipitated by the state expanding a nearby prison, providing lots of jobs and reasons for people to move here, and partly by our proximity to Los Angeles (within distant commuting distance).
I don't have any simple solutions to offer, because of course there aren't any. But people who look dismissively on California should awaken to the fact that many of California's policies are the result of attempts to deal with some of the issues you raise. We have 40 million people. Many of the low-key, lackadaisical approaches that rural states use aren't enough here.
I remember being on a trip to Montana not long after California introduced vapor recovery hoses on gas pump nozzles. They were kind of unwieldy and an inconvenience, and some people at a gas station in Montana were scoffing about them. As well they could, in a huge state with an insignificant population. But California didn't introduce vapor recovery legislation just to annoy residents -- officials were trying to deal with very real air quality problems. Duh.
And if you have overcrowded schools in need of new facilities and infrastructure, then taxes gotta go up, or ya gotta pass a school bond to raise money, or something different.
As derided and mocked as much legislation is, the vast majority of it arose in an attempt to solve a problem. California law requires swimming pools to have a fence and gate because far too many innocent children were drowning. It wasn't just legislators trying to inconvenience pool owners.
Good luck in your changing rural community, and remember: many laws originate to deal with real problems, and growing populations DO cause problems.
Yes, exactly. We bought property in part because the roads were not paved, and we intended to ride our horses from our place to a nearby established network of horse trails. The summer after we moved in they paved the road for gravel haulers for a big construction project. That’s over now, but there is a tempting straight stretch of over a quarter mile right in front of our driveway that a variety of people use to test their tires, their engines and their brakes. I took a young green draft horse out for exposure to traffic. An idiot in a large pickup came around the corner, and accelerated towards us, I guess hoping for a show. This sensible mare did what she had been trained to do in an unexpected situation, she froze. The pickup slammed on his brakes and skidded around us, cursed us roundly, but drove on. We don’t take our horses on the road any more. The last road millage failed, the pavement is deteriorating steadily, but the motorcycles keep roaring by. Ugh.
Yeah, I realize that part of the "Live and let live" idea only works if you live where somebody's action isn't likely to directly affect another person because of close proximity. My area is likely to swing left, and I realize that is partly for a good reason: More management is needed.
With that being said, I think there's a way to get things done without passing laws. Good incentives and social pressure can encourage people to make the right decision without actually being compelled to do so, and I think the right thing to do most of the time is to suggest rather than order. What I don't want to see is a bunch of permanent laws passed that solve a bunch of problems that the area has at the moment and end up causing over-regulation in the long term.
My local school, which I attended, has been in a fast-growing neighborhood. They passed a bond about 6 years ago, built a huge new wing onto the school, and suddenly they're full again. They tried passing a bond again, but in an area where property taxes are already rising, nobody was surprised when it failed. The people in the area need a school, but they also need not to be paying crazy taxes. I suspect this will end up with band-aid solutions like temporary classrooms that will be worse for everybody: The taxpayers will pay more in the end once something eventually does get passed, and kids will end up in subpar temporary buildings (our weather is probably not great for any kind of temporary infrastructure.)
Thanks for your thoughts, and the reminder that places don't stay the same forever and that regulation has a proper place and purpose.
What sorts of incentives and social pressure do you apply to the people who ban “To Kill a Mockingbird” from the school library? I am very interested in any suggestions.
(My username is a TKAM reference, BTW. I am not unusually fond of pickles. Well, I am unusually fond of pickles, but not enough to name myself after them on the internet.)
Book banning is an interesting one to me. I am conservative(ish), and I don't know a single person in my circles who would object to something like "To Kill a Mockingbird", 1984, or "Huckleberry Finn." Maybe they would argue that elementary aged children shouldn't read these books, but all three were part of my high school curriculum, along with "Lord of the Flies" and "The Kite Runner", both of which have some sexual content. Nobody complained. The class was given the option to do an alternate assignment to "The Kite Runner" and as far as I know, nobody did.
I think it's a valid argument that young children shouldn't read certain books. "To Kill a Mockingbird" was on the grown-up books shelf when I was young, and I heard that it was good and asked to read it when I was about ten. I was told no, to wait, and I did wait, and I think that was good because I was much more ready to process a story about a false rape allegation, horrific racism, hypocrisy, poverty, and domestic abuse when I was fifteen than when I was ten or eleven.
So with all that being said, I would do two things if I encountered somebody campaigning to remove TKAM or similar from a school library or English class curriculum.
1. Talk to them and explain the value I believe the book has, and why removing books from libraries is not a good thing to do on principle. The social pressure can come in the form of me, as somebody in their social/political circle, disagreeing with them openly and strongly.
2. Argue that maybe instead of removing the book from the curriculum entirely, it can be moved up a few grades. This is probably especially helpful for books that are being challenged for sexual content, like 1984.
If you are currently working to keep TKAM in a library, but you aren't in a position to do option one, maybe look for the conservatives in your area who aren't for banning books, and ask them to do option 1 for you? I know it's hard to find the reasonable people when there's a sect of squeaky wheels being very noisy about why TKAM should be banned, but I'm pretty sure that's a pretty fringe position in most parts of the US, and most of your local conservatives, and all your local libertarians, are just as annoyed as you are at the squeaky wheel people. I know I would be.
Are tax rates linked to income rates? Changes in average valuation of houses? Do tax rates have a built-in fudge factor for the fact that it will cost far more today to repair the main road and pave that alternate dirt road that now has more traffic on it than the main road, than it cost to pave that main road originally way back whenever? Small rural infrastructure gets overwhelmed these days because no one remembers when it wasn't there at all, so they take it for granted. This requires lots of communication on the part of local officials that no one wants to hear, especially if the people who should be communicating this information don't acknowledge the truth of this.
There is no sales tax, so the local taxes are the state income tax and the pretty significant property taxes which are based on house prices. (Whether that's your individual house price or the average in your neighborhood, I am unsure--not a homeowner!)
It's funny you mention the main road and alternate dirt roads. I can think of a couple pairs of roads like this. Not quite the situation you describe, but people are finding creative shortcuts through residential areas or farm fields to get where they need to go and avoid traffic.
That's a good point about the people who actually need to be doing the communicating. I feel like the city and county law enforcement are the ones who are most vocal about the strain on their resources, while some of the rest of the leadership are pretty quiet. The road complaints are coming from citizens, including people who have to drive in the most congested part of town, and then people who live along the shortcut roads and are trying to raise kids who want to ride bikes and stuff along an increasingly high-traffic, high-speed street.
Yup, we already have some fun culture war stuff going on, and have since the early Trump years. A family member and I passed a picturesque downtown park, and my family member asked "Oh, what do they use that park for?"
"Protests, mostly."
(Left wing, right wing, occasionally devolving into minor violence, like some random person pepper spraying protesters. I find most of the protests themselves to be pretty meaningless, but the incidents and negative dialog that arise from them are making me really concerned.)
I never made the connection that the people moving here would be more conservative in the pro-policing sense, but that's probably true. Thanks for your thoughts.
Reminds me of the park here. Back when destroying statues was the hip way to protest racism, the local BLM affiliate pulled down and destroyed a bronze statue of a civil war soldier. Of course, this being north of the Mason-Dixon line, the statue was for and of a volunteer regiment of boys in blue, but those protestors at least demonstrated how much they hated racism.
Yeah, the people moving here are the fancy houses sort of people. Some of the fancy houses are clustered in resort areas, which might cause some problems for those of us who don't live in resorts, if police start focusing on those areas. But I think you make a good point. Probably underfunding, and not public opinion, is the biggest threat to good local law enforcement here.
For the most part, it's pretty much just normal protests. The local branch of the Women's march every year. Some protests after George Floyd was killed. Some people protesting abortion by hanging up baby clothes. Some people protesting Dobbs by dressing up as handmaids from "The Handmaid's tale". (The pepper spray incident was against those protesters.) I don't know for sure, but I think the local high schoolers tend to be a good percentage of the left-wing protests.
Our area has historically had some weirdos of the far-right variety hanging around, and occasionally something crazy they do sparks protests and controversy. I am against far-right weirdos and believe their philosophy is awful and dangerous, but I honestly think the best way to handle them is to not feed the trolls. They just need to be ignored and/or laughed at until they realize that the far right weirdness is stupid.
And handling your own keys is simply unrealistic behavior for a huge, huge % of humanity. Hence why some of us have been shouting 'hey this whole crypto thing is never going to work out' for years now
Dude. The bell curve has two sides. Half of humanity has an IQ under 100.
And even among the computer literate, people are busy. I'm sure I could figure out how to manage my own crypto <mumbles> <mumbles> thing, but I have other shit to do. When the hype got really high I stuck a few hundred in Coinbase. Which turned out to be a mistake from an investing point of view, but they haven't ripped me off yet.
Most of the population will never be capable of handling their own keys.
How many kids do you have, oh so well adapted mmirate? or do you measure adaptation in imaginary economic units? Perhaps there is a cryptographic sequence out there that calls you father? Inheritor of the earth no doubt.
I don't know how to hold or own crypto. I'm sure I could learn it. But I have enough stuff on my plate. I've already got a few bank and financial accounts to keep track of, I've got medical visits I need to stay on top of, I have government forms and taxes that need to be filled out promptly and paid, and a bunch of random tech accounts I need to manage for my job. Just making that list of things feels minorly stressful cuz I'm probably forgetting other important stuff.
Transaction costs in life add up. Especially when those costs are my time and level of frustration. If someone earns ~$100/hr then a 1 hour setup is 900 dollars cheaper than a ten hour setup.
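The arithmetic here is simple enough to spell out; the $100/hr figure is the commenter's illustrative number, not a claim about anyone's actual wage:

```python
# Transaction-cost sketch: time spent on setup, priced at an hourly rate.
# The rate and hour counts are the commenter's example figures.
hourly_rate = 100        # $/hr, illustrative
simple_setup_hours = 1   # e.g. opening a custodial account
diy_setup_hours = 10     # e.g. learning self-custody, managing keys

savings = (diy_setup_hours - simple_setup_hours) * hourly_rate
print(f"The simple setup is ${savings} cheaper in time costs")  # → $900
```

That's before counting the ongoing attention cost of keeping keys safe, which the comment argues is the real killer.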
Everybody here seems to be casually talking about FTX being a big fraud. My impressions (mostly based on reading Matt Levine) were more focused on extremely poor risk management rather than explicit fraud.
Does anybody have a good explanation of the fraud angle, as opposed to the "accepted your own stock as collateral against loans" angle?
I believe loaning out the deposits was against the terms of service so that is what could be called fraud.
Also, SBF is reported to have had a back door that allowed him to change accounting numbers with no audit trail. Highly unlikely he wasn't using this for fraud!
It is unclear if it was always a fraud. Many things point to "probably", but there's no smoking gun yet. It is 100% clear that SBF committed crimes sometime during the summer and has been lying since then.
I think it was unlikely to be an honest mistake even when the last Matt Levine article came out, and things have evolved since then. I heard rumors that someone found a backdoor they used to transfer the funds without risk management people noticing. Also, the official company account tweeted that they were allowing withdrawals from the Bahamas only in order to comply with a request from Bahamian regulators, and then the Bahamian regulator tweeted that there was no such regulation and they had never asked FTX to do this - I think this might have been a ruse to let insiders withdraw first without provoking suspicion.
A lot of this is still rumors but I would really really like to believe there was nothing outright illegal here and I still think there's a <5% chance that's true.
"Does anybody have a good explanation of the fraud angle, as opposed to the "accepted your own stock as collateral against loans" angle?"
Maybe it didn't start out as fraud, but it sure looks like it ended as one.
Funds to the tune of USD 600M were drained from users on the day of the bankruptcy filing, in what FTX claimed was a "hack." Very convenient timing on that one; most likely an inside job. It was facilitated through an app update, which is decidedly not trivial to do from the outside, let alone on short notice.
Yesterday, SBF and FTX staff were detained in the Bahamas while trying to get to Dubai, possibly on the assumption that there is no extradition treaty with the US.
Matt makes the distinction between illiquidity and insolvency. The fraud was that they told everyone they had 1:1 deposits, full liquidity. In reality, they took that cash and loaned it to Alameda to cover for its bad bets. They were clearly hoping that they could make new bets with this cash, make the money back, and pay back into the depositors accounts. But... it blew up.
They clearly weren't supposed to loan out depositors' cash. They didn't own it; they were just supposed to be holding it.
So my understanding of Matt's explanation is that it was just obviously not the case that they were holding assets 1:1 in the strictest sense and nobody who knows how a future works could have thought that they were.
A big part of their business was offering leverage and futures and things that inherently involve borrowing and lending assets.
I guess the main fraudulent claim was that they weren't using the funds for any lending outside of the sort that is inherently involved in offering futures or margin loans?
There is a big difference between making a market in something and taking a position. Market makers will have some exposure, but the point is to get the customers to hold the risk. At least in theory, the market should be able to go anywhere without dragging the market maker with it. Maybe this market is too volatile, but more likely someone goofed.
Yep, doing risky things with your customers' money to try to pay off your own debts, and telling yourself that you will have enough later to pay back your customers, is fraud based on self-delusion. Or just fraud. Geez. Large amounts of money make people do crazy, stupid, and illegal things.
I'm sympathetic. You put a lot of your energy into this effective altruism thing, and now some significant part of it has been proven fraudulent. Not nearly all or even most, but a big and visible part, and that will no doubt give the NYT/New Yorker crowd another excuse to go after EA and rationalists and more ammo to make you look bad.
Emotionally, I can only say: this is a HUGE problem in charity, to the point there are multiple organizations giving ratings to various charities. So whatever they try to say, this happens a lot in the charitable world. Consider the base rate, and cut yourself some slack.
Are we overcorrecting for the FTX situation? I admit my belief in seemingly-ethical founders is shaken. But at the end of the day this is an N of one. I’m really not sure how to update my beliefs.
Yeah. I didn't change my opinion on Jews because of the Madoff thing or the viability of natural gas because of Enron. Not sure why FTX should change my view of EA or crypto (which was already low).
Yes, everyone always overupdates on every big news story. But it would be offensive to mention that now so the only option is to wait until the panic dies down and then double-check all the updates we made.
I understand the reasoning, which both you and Eliezer have argued, that says that if traditional finance types couldn't predict FTX's downfall, who are we to say that we should have been able to? There still seems to be something missing there. Consider the following:
> What Sequoia was reacting to was the scale of SBF’s vision. It wasn’t a story about how we might use fintech in the future, or crypto, or a new kind of bank. It was a vision about the future of money itself—with a total addressable market of every person on the entire planet.
> “I sit ten feet from him, and I walked over, thinking, Oh, shit, that was really good,” remembers Arora. “And it turns out that that fucker was playing League of Legends through the entire meeting.”
This, um, does not strike me as the sort of reasoning that rationalists would endorse, at the very least.
2) Let's carry your hypothetical forward and say that some traditional finance people had seen something awry at FTX. What would we expect them to have done to capitalize on that? This is related to a classic asymmetry in that you can't short a private company's stock, so investors can go bananas with too-high valuations and no one can do anything about it. A skeptic also couldn't just, say, invest in Binance instead, since they aren't pure competitors; as CZ acknowledged, FTX's downfall hurts the whole industry. Maybe there's some other complicated maneuver you could have performed -- I'm no crypto expert -- but it seems like the sort of thing where the best move is not to play.
3) If we were actually "trusting the experts", why didn't anyone explicitly say so? I don't remember any pieces where someone actually grappled with the volatility of crypto, and made the case to the EA community that "FTX is different, you can trust them" or "I was skeptical until I saw that so and so investors had done some due diligence, and nothing they invest in ever collapses." (That sentence sounding preposterous is my previous point.)
4) There's a much more obvious explanation that we really need to wrestle with and which Scott's post inadvertently admits: We blindly trusted Alameda/FTX because we liked the people there. They speak our language, they come out of our own community. These relationships blinded us to the possibility of the disaster that's unfolding now.
1. Disagree re: Sequoia. Their reasoning was cringe, but in fact SBF and co created a $30 billion company. My impression is most of that was real work and then the fraud started around 6/2022. So he's either someone who's really good at making money and ethical, or really good at making money but unethical. I don't know why him playing League of Legends should shift me into the second category.
2. If it were me I would have shorted Solana, although I'm not an actual trader and I don't know if there would have been timing issues. Certainly they could have shorted Solana once CZ dumped FTT, or when the crisis seemed to be sort of starting but nobody was sure yet.
3. This is a weird argument. If tomorrow it turns out that Neptune doesn't exist, is it fair for you to say you were just trusting the astronomers? Then why didn't you say "I was just trusting astronomers"? When you do consensus things that everyone agrees are fine, you don't need to lampshade that you're doing that.
I think people are getting screwed up here because there are two slightly different questions. First is "was it bad to deal with crypto people given that crypto has some well-known inherent volatility and scamminess and ethical issues?" The second is "was it bad to deal with FTX given that they were a giant scam of the sort nobody predicted beforehand". I think the first is a reasonable debate and the second is "nobody predicted it so we didn't either". But I think if this debate happens now those two questions will inevitably get really confused.
4. I think if you had asked me a month ago, I would have been indifferent between trusting FTX, trusting Binance, trusting Coinbase, trusting Facebook, and trusting Wal-Mart, all for mostly the same reasons (they are big companies that the market seems to like). Maybe FTX would get a small bonus for having one person I knew + apparent EA values, and maybe a small malus for being in crypto, but overall I think it would just be my big-respectable-company prior.
1) I don't think the argument is about his playing LoL, but about the fact that the investors portrayed his playing LoL during a meeting as a positive. The reasoning presented is not epistemically sound. This might not mean much, maybe puff pieces like that are the norm and the discussion of finances is left for dry reports, but the argument presented was not that his playing LoL should shift our view. If I were to make such an argument, it would not be based on the fact that he plays LoL but on the fact that he does so during investor meetings.
4) I think it's obvious that we should trust Wal-Mart more than the others (for relevant values of trust). Wal-Mart is a mature and stable business, the others are less proven and more volatile. Wal-Mart is publicly traded, which as I understand requires more disclosure. I don't see the argument for treating FTX as being in the same class as Wal-Mart when it comes to using the market to justify trust.
Wait, you're including *Binance* in the list of trustworthy companies? Binance of the "Bond villain compliance strategy" (e.g. https://twitter.com/patio11/status/1411869320917884932)? That should at least make you cautious, Binance isn't Sci-Hub.
Putting Coinbase and Wal-Mart in the same category is already optimistic, the time-to-failure of crypto exchanges and of supermarkets is rather different!
> My impression is most of that was real work and then the fraud started around 6/2022
I don’t really know anything about this situation specifically but in the aggregate, it sure looks a lot like a classic margin call scenario to me. You have a highly leveraged position based on assets that suddenly decline in value, leaving you exposed, and you borrow assets from somewhere else to patch it up because you’re sure that the assets will recover and you’ll be able to make it all nice again. And then it doesn’t happen that way.
I have a feeling it’s going to turn out to be more hubris than fraud.
Re 2: Sure, you could have shorted either Solana or FTT. But it's rarely a good idea to short a company just because you know something is off. Especially with assets as volatile as hyped-up crypto tokens. Even if you know for sure that the company is going to 0 within 10 years (and the conditional likelihood of this, given something being off, may not be super-high). The reason is that the token may pump a lot before that happens, like 5-10x or more. And unless you have a lot of capital to spare to avoid liquidations, you may face much greater losses than you hope to gain from your short.
This is a big problem when shorting volatile assets over a long time horizon. But it can also get bad even in short time-horizons. Eg personally I experienced this during the Terra/Luna collapse. I made good money shorting their stablecoin Terra. But ended up getting short-squeezed on Luna when, right in the middle of the collapse, the Luna price suddenly increased 500%+ in less than an hour.
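The liquidation dynamic described above is easy to see in a toy simulation (all numbers below are made up for illustration; real exchanges also apply maintenance margin and funding costs, which make things even worse for the short):

```python
# Toy model of a collateralized short position (hypothetical numbers).
# Even if the token eventually goes to zero, a big enough pump along the
# way exhausts the short seller's collateral and forces liquidation.

def run_short(entry, size, collateral, price_path):
    """Short `size` tokens at price `entry`, posting `collateral` in cash.
    Returns final P&L; the position is liquidated (collateral lost) as
    soon as the mark-to-market loss reaches the collateral."""
    for price in price_path:
        loss = (price - entry) * size
        if loss >= collateral:
            return -collateral  # liquidated mid-path
    return (entry - price_path[-1]) * size  # held to the end

# Token pumps 5x+ before collapsing: the short is "right" but still loses.
print(run_short(entry=10, size=1, collateral=15, price_path=[20, 55, 5, 0]))
# Same collapse without the pump: the short keeps its full profit.
print(run_short(entry=10, size=1, collateral=15, price_path=[8, 3, 0]))
```

With these made-up numbers the first call returns -15 (liquidated during the squeeze, exactly the Luna scenario above) while the second returns 10, even though both paths end at zero.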
Sequoia has very different incentives. If they assess FTX as having 99% chance of imploding into a black hole of fraud and swallowing all their money, and 1% chance of growing 200-fold, that's an okay bet for them. They probably have appropriate techniques for segregating themselves from criminal investigations into their fundees; they make many different investments so they are fine with most of them failing as long as on average they are profitable. If the cornerstone of your philanthropic ecosystem has 99% chance of collapsing, that's not great, though. Both because there is no large pool of other bets to make the actual outcome converge to the expected value, and because a collapse is not zero-value (like it probably more or less is for Sequoia) but can be significantly negative; there is PR damage, loss of trust, emotional damage, talented and motivated people ending up in debt etc. So freeriding on a VC firm's evaluation doesn't really make sense.
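The asymmetry in that comment can be put in numbers. Using the commenter's stylized figures (99% total loss, 1% chance of a 200x; these are illustrative, not real Sequoia estimates), a quick sketch shows why the bet clears a diversified VC's bar while still being a terrible single point of failure:

```python
# Stylized expected-value sketch using the 99% / 1% / 200x figures from
# the comment above; illustrative numbers, not real Sequoia data.
p_win, multiple = 0.01, 200

# Per dollar invested, the bet has positive expected value for a VC:
ev_per_dollar = p_win * multiple            # 2.0, i.e. doubles money on average

# But the outcome is extremely skewed: even a portfolio of 100 such
# independent bets ends up with nothing more than a third of the time,
# and a philanthropic ecosystem resting on ONE such bet collapses
# with probability 0.99.
p_portfolio_wipeout = (1 - p_win) ** 100    # ~0.366

print(ev_per_dollar, round(p_portfolio_wipeout, 3))
```

The point of the sketch: Sequoia's large pool of bets lets the average dominate, while a funder exposed to a single bet gets the 99% branch almost every time, which is why freeriding on the VC's evaluation doesn't transfer.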
I think the crypto aspect of this is a much bigger deal.
1. That $30 billion valuation you cite was entirely based on the speculative prices of FTX's crypto assets. I think there are many reasons to argue that it "wasn't real" in the same sense that Ponzi scheme valuations aren't "real" value, and that we could have suspected this ahead of time just from its being crypto. You can't say the same about Facebook or Wal-Mart.
2. I'm not talking about anything anyone could have done once CZ dumped FTT. I'm talking about things finance people could have done PRIOR to EA's gatekeepers like yourself essentially saying "yes, it seems like a good idea to fund a bunch of new charities this way".
3. There's not a consensus of any sort that crypto is a reliable business of the sort that you want to build a large number of charitable organizations on top of. Tons of finance-savvy people think crypto is mostly a scam; no astronomy-savvy people think Neptune is.
I agree there are two slightly different questions I'm slightly motte-and-bailey'ing: "Is crypto too volatile to build an entire apparatus of charity organizations on?" and "Did SBF/FTX do something unethical on top of being in a generally risky business?" But they're also linked: By Eliezer's current guess, the general crypto downturn of mid-2022 caused a bunch of trouble for SBF/FTX that led them to take riskier/unethical actions rather than eat the losses.
4. I agree with this characterization of the prior calculus, I just think the malus for being in crypto should have been large, especially in the uncertainty. And yeah, that'd count against Binance and Coinbase too. But my big point is that given that elevated uncertainty, additional investigative information had a lot of value to EAs considering building a bunch of new charities out of SBF's money.
a) There might've been some pro-SBF bias specifically.
b) There likely was pro-crypto bias in significant chunks of the community in general.
c) There likely was a recency bias: no hot, top-VC-backed decacorns with $30-billion valuations had spectacularly blown up recently, leading charities and apparently Scott to underestimate the distinction between stable old public companies and volatile private companies in a risky sector.
I'm pretty sure a) didn't matter that much, EA was (unfortunately) taking money from a bunch of other crypto guys.
b) and c) likely did.
But that aside, I don't see the argument against EA piggybacking off Sequoia's assessment of FTX, given that Sequoia certainly had better expertise and better information.
Without knowing the extent to which EA orgs made their futures highly dependent on FTX money (do we know?), it's hard for us to judge whether their course of action was appropriate given the general riskiness of VC valuations, in crypto in particular, and after the crash of this year in particular, or not.
It's fair to ask whether EA orgs factored crypto into their risk assessments - it's not fair to expect them to have taken a stand on crypto in a way society in general/many other orgs didn't.
I don't know crypto - was it actually possible to short FTX? That is, if traditional finance types had started to investigate FTX, could they have made money by alerting the world to the fraud?
That's pretty much what the CEO of rival exchange Binance did. He saw some things in their financials that were sketchy and so he dumped his whole stash of FTX-associated assets.
Was that really a short, or was he just getting those assets off his books because he thought they stank? A short would imply he intended to buy them back after their collapse.
Can anyone recommend a good English-language history of the Reign of Terror following the French Revolution? I want to learn more about the period than I got in high school, but I don't have enough background in the area to know which are the reputable histories.
Simon Schama's "Citizens" is an excellent and thorough overview of the French Revolution as a whole, from its antecedents through to its longer-term sequelae.
If you prefer fiction, to give you a sense of how people felt and how they self-justified things, you might want to try "A Place of Greater Safety" by Hilary Mantel (the same woman who wrote the Wolf Hall trilogy).
'Revolutions' podcast (the same guy who did 'The History of Rome') has a French Revolution part, and you could either work your way through it all or find where your interests start.
If I'm understanding, it sounds like Scott here twice refutes the standard EA notion that there are no diminishing returns to money used for charity, in his lobbying example and his opening sentence to paragraph 2:
>The past year has been a terrible time to be a charitable funder, since FTX ate every opportunity so quickly that everyone else had trouble finding good not-yet-funded projects.
Does anyone still want to defend the notion that returns to money for charity are linear?
Yeah, in retrospect I elided things in that sentence - this is mostly true for EA style charities, and true most of all for AI alignment charities. There are only a few hundred people in the world qualified to do AI alignment, and no obvious quick ways to turn more money into more alignment except waiting for those people to build more infrastructure.
I think it's much less true for eg GiveDirectly, a group that gives money directly to poor Africans, of whom there are a pretty unlimited amount.
I think different parts of EA are thinking more about AI alignment vs. GiveDirectly and so have different opinions on this question. If you're just looking for any charity at all, then the no-diminishing-returns argument holds at least up to the number of poor Africans.
> and no obvious quick ways to turn more money into more alignment except waiting for those people to build more infrastructure.
Has anyone considered "buying out as many existing AI companies as possible"? If enough money could be gathered, it sure seems like an effective way to turn lots of money into "Fewer people are doing dangerous vs. 'safe' AI research".
Yudkowsky's favorite example of unsafe research is Facebook's lab. Gathering enough money to buy out Facebook-level corps is probably not near-term feasible.
While reading it I did find myself wondering if I was reading the open thread! Glad to hear ACX grants are unaffected.
Anyway, I have slowly been building a project for two years and I'm finally happy enough to put it here: www.i3italy.org . It is a community interest company whose main goal is to be a guidebook aimed at Italians that live in England, written in Italian. If that seems niche, it is - but these are 520,000 people, many vulnerable and hit by Brexit, then Covid-19, then Brexit again, and that is without considering the mess that Italian politics and (some) institutions have been in the past few years.
Somehow, my journey of self-discovery on how to use 10,000 out of my 80,000 hours ended up in realising that this is the best way to maximise my free volunteering time's impact on the world... taking into account my skillset, knowledge, free time and willingness to learn new skills (so this answer is probably unique to me). Really hope I can bring it to its full potential: wish me luck, ACX/EA crowd!
Hi Cesare, just wanted to say... great work! It is not at all relevant to me but I browsed the site and have no doubt that it'll have a positive impact for Italians that are located in England. Keep pushing, keep improving, good luck!
I'm skeptical that these sorts of scandal markets will help much. As you said yourself:
> There’s a word for the activity of figuring out which financial entities are better or worse at their business than everyone else thinks, maximizing your exposure to the good ones, and minimizing your exposure to the bad ones. That word is “finance”. If you think you’re better at it than all the VCs, billionaires, and traders who trusted FTX - and better than all the competitors and hostile media outlets who tried to attack FTX on unrelated things while missing the actual disaster lurking below the surface - then please start a company, make $10 billion, and donate it to the victims of the last group of EAs who thought they were better at finance than everyone else in the world. Otherwise, please chill.
Manifold does a good job of aggregating public info, but I have yet to see a market that clearly figured out something important before the public did. Usually it's markets that jump around based on news articles or Tweets that some Manifolder found, not the other way around. For example, there *was* a market on FTX, and it got blindsided too:
Oh no, you're going to realize that I'm not actually writing blog posts, just putting a bunch of individually-meaningless characters together on a computer screen and hoping nobody notices!
I'm not sure how to interpret this reply, sorry. I can tell you're being sarcastic, I assume because you disagree, but I don't know what you're actually trying to convey, or why exactly you disagree.
I have to say I really don’t get prediction markets, meaning why they are any better than placing a bet on something with Ladbrokes or using the stock market if you want to play with money. I fail to see how any predictive value can come from such a market, because if it can be gamed by an individual for their own interest it seems to lose all utility as a predictor of anything.
I don't think this is about incentivizing investigative work. I think there's a question of "did Joe Average Charity Founder err in accepting FTX money because it should have been obvious to him that FTX was a fraud". If the "is FTX a fraud?" prediction market is at <1%, that shows it should not have been obvious to him. Maybe FTX was a fraud, but in the way where it was investigative reporters' responsibility to establish and not the way where ordinary people should have known it from already-available information.
I mean, if the market is tiny then there's also very little information there for Joe Average Charity Founder to use. The FTX bankruptcy market was at 4% for over a month with no activity. How is that supposed to have helped Joe?
I think it's proving the interesting and nontrivial statement "there isn't enough information to conclude or even inspire interest in a prediction market that FTX is fraudulent", which is the question I'm interested in. I bet that market rocketed way up as soon as information came in!
Indeed it did. But you phrased these prediction markets as "the solution" to questions like "How can I ever trust anybody again?” I suppose I don't entirely understand what you meant by that question, but I still fail to see how "well we set up a prediction market!" is supposed to solve anything in that vicinity.
To put it another way: Let's say we learned that there was fraud on the other side of EA. Suppose it turned out that AMF leadership was, say, distributing ineffective, much cheaper bednets and pocketing the difference in costs. Obviously we'd also feel betrayed. But our reaction wouldn't be, "Well, I checked a prediction market and no one else could see this coming, so I still feel good about my donation."
Instead, we would look to the investigators we actually have and ask how they could have gotten this wrong. We'd be combing over GiveWell's extensive, published work and asked how they missed this. We'd expect GiveWell itself to issue a comprehensive report and identify the bad actors at work. And even then, the science would probably still also be solid and with the right personnel changes, AMF could go back to distributing the actual bednets, limiting the damage.
In fact, it's because of this extensive public ledger of GiveWell research that I have difficulty imagining something analogously extreme and destabilizing occurring at AMF.
Obviously there's a lot more potential for scams and other big problems on the charity-receiving end of things than the charity-giving end, so that's why we have GiveWell but no analogous organization -- I guess you would call it ReceiveWell? -- for potential charity recipients to vet the sources of their funding.
But this whole experience shows that we really should take a long look at whether ReceiveWell would be worth setting up.
I see, so you're using prediction markets more as a form of social proof to say "nobody else saw this, so you shouldn't be upset at yourself either nor should anyone else be upset at you", rather than a way to actually predict people's behavior. I didn't understand that originally.
Right now Manifold doesn't allow anonymous bets. This means that if someone has insider info and wants to turn a profit correcting a market, they have to at least attach a pseudonym to it, which is a slight disincentive. I'm not sure how much of a disincentive it actually is considering the ease with which one can make a new Google account, but it's at least an inconvenience.
17 % on Ukrainian victory (up from 14 % on October 30)
I define Ukrainian victory as either a) Ukrainian government gaining control of the territory it had not controlled before February 24 without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24, or c) return to exact prewar status quo ante.
45 % on compromise solution that both sides might plausibly claim as a victory (unchanged).
38 % on Ukrainian defeat (down from 41 % on October 30).
I define Ukrainian defeat as Russia getting what it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were before the war claimed by Ukraine but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s)*, or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets a mutual defense treaty with Poland and Turkey, that does NOT count as Ukrainian defeat.
Discussion:
Since previous update, we have two major developments, which, however, contradict each other with respect to the future of the war.
First, there is the great, albeit not unexpected, Ukrainian victory at Kherson. I should note that in the comments to the previous update, someone argued that, actually, the Russian retreat east of the Dnieper would be bad for Ukrainians in the long term. I respectfully disagree.
Second, it looks like Democrats will lose their majority in the US House of Representatives. As of now, Republican victory is still not certain, but it is highly likely. Annoyingly, 538 froze their forecast just before the elections, but from the tone of their coverage I am guessing that now Republicans have better odds than before the elections, and then they gave them 84 % chance to take the House.
I don’t think that Republicans would simply block further aid to Ukraine; the pro-Ukrainian bipartisan majority will imho still be pretty solid. A Republican victory however makes further substantial increases in direct US aid to Ukraine, and in indirect aid in the form of sanctions, less likely (btw, I don’t buy the “actually, sanctions help Putin” argument). And, if Ukrainians won’t either moderate their war aims or be faced with some major disaster on the battlefield or in the economic sphere, I think the long-term political sustainability of current levels of Western aid to them is highly questionable.
*Minsk ceasefire or ceasefires (first agreement did not work, it was amended by second and since then it worked somewhat better) constituted, among other things, de facto recognition by Ukraine that Russia and its proxies will control some territory claimed by Ukraine for some time. In exchange Russia stopped trying to conquer more Ukrainian territory. Until February 24 of 2022, that is.
I want to get a sense of what are your assigned probabilities on condition of a) continued Western support; b) Russia not going for the nuclear option; c) continued Western support AND nuclear weapons not being used
Because in my mind, conditional on Western support and nuclear weapons not being used, I had already made a significant update(*) towards some definition of Ukrainian victory only a few days into the invasion, when Russian military dysfunction and Ukraine's capacity to fight and win pitched battles had become apparent. That was enough to make it the most likely outcome, since Ukraine could already stop the Russian advance, and a prolonged war would only give them an advantage in force quality (just as we have seen). In other words, is the difference in our probabilities about military performance, or externalities?
*) Unfortunately I never wrote down explicit predictions pre-invasion, so I don't know how big (my high confidence that the invasion would happen is well-documented, however). Trying to reconstruct my view based on what people I consider clear-thinking, and whom I would likely have listened to, had written and said, I probably expected a very costly and limited Russian victory and a successful Ukrainian guerrilla campaign making long-term occupation untenable.
So, I am going to start with the easier one, nuclear weapons. My estimate is already based on an assumption that Russia will not use nuclear weapons. The probability of their use was imho always very low, and after the events of the last weeks it became negligible.
There are some scenarios when nuclear use would make sense for Russians, like if Ukrainians would be given powerful long-range weapons and would start to bomb Moscow, or something like that, but those are exceedingly unlikely. Otherwise, I think that if Russia started nuking now, chances of Ukrainian victory would drastically increase. Ukrainians will not surrender just because someone dropped one or two nukes on them, and international retaliation for such an action would be devastating, for Russia.
With regards to Western support, just to be clear, I do not think that Western support to Ukraine will ever be completely cut off (during the war, of course). The key question, however, is how MUCH the Ukrainians will get in the future. My dodging non-answer is that if Western support to Ukraine is going to be higher than I expect, the odds of Ukrainian victory would be higher than those that I had given above, duh.
My mental model of the future of Western support, after seeing the results of US elections, is something like this: there will be slow erosion over time, unless Ukraine starts losing badly, in which case West would probably do something to prevent them from being completely destroyed. I recognize that this goes against conventional wisdom which still seems to be that Ukraine needs to „prove“ it can win, in order to get support; that was perhaps true for, like, first two weeks of the war and people have failed to update since then.
If there is no erosion of Western aid (including indirect aid in the form of sanctions) over time, that would mean my model is wrong and odds of Ukrainian victory are higher, but I find it difficult to give precise estimate.
Many Republicans in the House have voted in favor of Ukraine aid, so a slim minority probably doesn't impact things if a bill gets to a floor vote (maybe they can hold things up in committees).
The West's support is widely believed to be conditional on Ukrainian performance. If Putin had taken Kyiv in a week as he expected, there wouldn't have been that much support.
If the economic nosedive continues and gets exacerbated by Russia succeeding in destroying key infrastructure, as it's now attempting to do, it's unclear for how long the army will be able to perform.
I don't think the West currently supplies, or even will be able to supply, everything the UAF needs to sustain operations if there were no domestic economy to prop it up. I'm not even sure we're supplying the full range of munitions rather than just what feeds higher-end Western equipment (while their workhorses are old Soviet/Russian stuff).
This is the conventional wisdom, but I think now it is almost 180 degrees from the truth. Perhaps for a few days in February 2022, there was doubt whether Ukraine would follow the Afghan path, and thus reluctance to support them for that reason. But now it is clear that the West does not want Ukraine to fall. Whether it wants Russia to lose, that is far less clear.
So I actually think that Western support would be increased if Ukraine suffered further defeats, but the West would be reluctant to support Ukraine in e.g. a reconquest of Crimea, not to mention an invasion of Russian territory.
>West would be reluctant to support Ukraine in e.g. reconquest of Crimea, not to mention invasion of Russian territory
that I agree.
I'd even say Minsk 2, but with an EU path and some strong security guarantees short of NATO (including Western troop deployment?), right now, in exchange for Donetsk/Luhansk would be pretty good?
I agree with Walter Russell Mead from a recent CWT that real Ukrainian victory is about "membership in the West"/some measure of safety from Moscow. What is doubtful is whether Russia will tolerate anything like that (given its implicit goals of destroying the Ukrainian state/guaranteeing no West there), and if so, what exactly the eventual deal is supposed to look like. It's not like Ukraine will buy a Minsk 2 to be followed by another war a few years later.
If you've seen anything great on the likely shape of the settlement pls link :)
The West's support for Ukraine did not waver in the roughly May-August period when Ukraine was slowly but steadily losing ground in the East and stalemated elsewhere. Rather, Western support for Ukraine *increased* in that period, with the delivery of HIMARS, Harpoon, HARM and lots of artillery.
There is presumably some level of low Ukrainian performance below which the West would say "why bother with this lost cause", but I don't think there is any realistic possibility of Ukraine suddenly failing that badly. The best the Russians can hope for at this point is stalemate, and we've seen that the West is willing to support Ukraine through a stalemate.
Including support for their domestic war economy. To the extent that Ukraine is still producing weapons and ammunition for this war, those factories are going to remain open. There's not a whole lot the Russians can do to stop them - just the partial disruption of the Ukrainian power grid has consumed an unsustainable number of Russia's small reserve of modern precision deep strike weapons, and the West can easily ship in generators to keep those factories running.
Support may not be eternal. First it's not clear that support will extend to trying to get Crimea back. Secondly it's not clear that the West has the weapon stocks to keep on supporting Ukraine (munitions for artillery in particular seem to be an issue).
Do you define a Ukrainian defeat as a) and b) and c), or as one of a-b-c? Because it suddenly occurred to me that most of the difference between your estimates and mine lies in what would count as a Ukrainian defeat (or victory). For example, a) + not b) + not c) would count as a victory for Ukraine to me, if not c) means joining either NATO or EU.
To be more precise, imho a Ukrainian defeat would be either a) Ukraine to stop claiming at least some of the territories that were before the war claimed by Ukraine but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling Minsk ceasefire(s), or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO.
So it is "or", BUT I would count fulfilling one of these conditions as a Ukrainian defeat only if Ukraine would not get anything important in return (yes, the word "important" introduces some wiggle room). If e.g. Ukraine agrees to renounce its claim to Crimea but gets NATO membership, I would count that as a compromise.
I try to base my resolution criteria on what Ukrainians themselves say they would count as a victory, not on some "objective" standard. And they absolutely say that they very much want Crimea back, and I see no reason to doubt their sincerity; I know that not only from political statements, but also from talking to common Ukrainian people.
It seems harsh to call the war a loss if Ukraine makes gains compared to the status quo ante bellum. Giving the Russians a thorough spanking and reclaiming Donbass but not Crimea is surely not a *loss*?
There is a difference between "not reclaiming Crimea" and "renouncing the claim to Crimea for the future". If there were a ceasefire under which Ukraine got back all its Donbass territories and Crimea remained disputed under de facto Russian control, that would of course be a great Ukrainian victory.
But, if there were a peace treaty under which Ukraine got back de facto control over Donbass in exchange for formally transferring Crimea to Russia, is that a Ukrainian victory? My impression is that Ukrainians would not think so. Zelensky would not be lauded as a war-winning hero if he agreed to that. Ukrainian politics and policy since 2014 have been dominated by the goal of getting all occupied territories back, which is of course a completely understandable and legitimate goal, given that the Russian occupation is blatantly illegal.
But imho you misunderstand the dynamics of the conflict if you think that the shooting would necessarily stop if Putin proposed that Russia was ready to renounce any other claims on Ukraine in exchange for international recognition of Russian sovereignty over Crimea.
But he's not estimating the probabilities of the outcomes most of us care about. His "Russian victory" and "compromise" categories both include outcomes that I would consider substantial Ukrainian victories along JohanL's lines. They also include actual stalemates and Russian victories, of course, but with no way to distinguish between them, who cares?
It's as if, the day after Pearl Harbor, someone published a probability assessment of the Pacific War that read "5% Japan invades and conquers the United States, 95% Japan is defeated, where Japan counts as defeated if they fail to conquer the United States". True as far as it goes, but that 95% includes everything from Hawaii being demilitarized and the Greater East Asia Co-Prosperity Sphere solidly recognized as Japan's unchallengeable domain, to the Japanese language being spoken only in Hell.
Most of us would care very much about those extremes and the range of possibilities in between, which means we don't care about the estimate that lumps them all together.
Well, as a Catholic who had to learn about (yet another) sexual abuse scandal last week and has good reason to think that the local hierarchy is hopelessly corrupt let me tell you I share what you feel...
Everyone involved in or around crypto is fraudulent. If you accept this as a fact, then it becomes a lot easier to not feel betrayed when they are proven to be frauds.
Indeed. It feels a little like being gaslit to read suggestions that no one saw this coming. The warning signs were there for the whole sector, regardless of whether people were raising the alarm about FTX in particular (and some people were).
I myself have taken a week's pay from a crypto firm for consulting work. My impression was that the people involved weren't fraudsters. They were genuine, earnest, naive, seemed to have pretty high IQs, and were also unfathomably stupid. I didn't return the money but I ran at the first opportunity and feel some lasting discomfort that I allowed myself to become involved even if briefly with a sector I am profoundly sceptical about.
Effectively Unethical: support, in any form, for crypto-based organizations which have no audit transparency or which do not conform to generally accepted accounting principles
Wow... In many ways I have no idea what you are kvetching about. Crypto has collapsed and some people are suffering? There are so many suffering for other reasons around me, why should I care about crypto millionaires? (I find a lot of EA stuff kinda dumb, give locally.)
Likely most FTX users were not millionaires. A lot of people used it as a bank (crypto generally positions itself as an alternative to traditional banking, and FTX for example offered a Visa debit card and ~8% interest on deposits).
I had a couple paychecks' worth in there. Fortunately I read the last open thread where I found out about the trouble it was having and was able to pull my stuff out before it fully fell apart.
My fiat currency was USD, but the majority of my balance was Dogecoin I'd bought for fun back in 2014.
(Admittedly while I initiated the USD withdrawal, I doubt it will complete. 90% of the crypto I was able to move immediately though; the exception was TRX for which I was not able to get a wallet address in time.)
On both fiat and crypto. There was a threshold above which one's deposit would receive a lower return, but I wasn't near that level.
As for the card, I didn't have one myself but they advertised that it exchanged crypto for cash at point of sale. (This appears to work similarly to the cards advertised by other exchanges like Coinbase and Binance.)
So with the debit card, you presumably wouldn't know (without doing your own calculation) the crypto cost of the purchase at the time of the transaction? I suppose that's no different from buying an item in $ using a £-denominated account, except that intra-day movements in USD:GBP tend to be modest.
USD inflation is currently 7.3%, so a nominal interest rate of 8% is only 0.7% real, which I can understand might be seen as modest, but it's still well above commercial rates, so how did you understand the money was being generated? Is it that FTX was (or was understood to be) lending the deposits out at some higher rate of interest, like a bank? If so, who are the borrowers?
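For what it's worth, the back-of-envelope real-rate arithmetic above can be checked two ways; the figures below are just the ones quoted in this comment, not audited numbers:

```python
# Figures from the comment above (illustrative only).
nominal = 0.08      # FTX's advertised deposit rate
inflation = 0.073   # quoted USD inflation

# Simple approximation: real rate ~= nominal - inflation
approx_real = nominal - inflation  # 0.7%

# Exact Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation)
exact_real = (1 + nominal) / (1 + inflation) - 1  # ~0.65%

print(f"approximate real rate: {approx_real:.2%}")
print(f"exact real rate:       {exact_real:.2%}")
```

The simple subtraction gives the 0.7% figure quoted above; the exact Fisher calculation comes out slightly lower, though at these magnitudes the difference is negligible.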
Or is the threshold you mention at quite a low level? Barclays currently offer a savings account which pays 5.12% on balances up to £5k (for their current account holders only). Clearly they lose money on this, because they're paying 2.12% above base rate, but that would equate to at most £106 pa. I highly doubt they will maintain that offer long term, so it's a modest cost the bank pays to acquire customers, just as other banks offer a £200 sign-up bonus.
Sorry for asking all these questions, but I don't really have much sense of how FTX was intended to function.
It's in the ballpark of what I've had from high-yield savings accounts with traditional institutions in the past (which admittedly slumped for a while but are apparently ticking back up to ~3.5%–4% these days).
It was also somewhat on the moderate-to-restrained side for what crypto enterprises often promise—Anchor was returning more than double that around the time I signed up with FTX, but that _was_ obviously unsustainable and fell apart a little while later.
Yeah, but you’d also be banking the increase in value of your crypto as an unrealized gain, right? What was the cost basis upon which they paid you 8% interest?
My understanding was 8% in terms of each currency or coin itself, irrespective of its relation to the dollar, and the tally of interest paid out to me in each coin was in the ballpark of that. But I don't have the exact numbers or the fine print handy to confirm.
Ok, I get it. If it is at all like you describe, that is a pretty swift piece of arbitrage on the part of that exchange, if it runs in their favor. Which apparently it doesn't anymore. I'm sorry for your loss; I hope it doesn't put too big a dent in your peace of mind. My wife and I just got completely screwed by a common- or garden-variety building contractor. You don't have to be a crypto genius to screw somebody. Pertinent to this situation and a lot of the preceding thread (re: can people be wrong about their own experiences), I am deeply wondering whether he really meant to screw me or whether he was just deluding himself in some way that I got hurt by. I am experiencing all the self-recrimination that others here are experiencing about whether or not I should've seen it coming. I reckon anything I take on faith is my responsibility; I could've done a lot more due diligence before I hired the guy, but for all kinds of reasons I didn't. I'm still sifting through that bag of old clothes. You're not alone.
I also question how effective a lot of SBF's donations have been. Tens of millions of dollars to political campaigning in America sounds about as effective as any average charity.
>"The end does not justify the means" is just consequentialist reasoning at one meta-level up. If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn't think this way.
That's a very roundabout way of saying "consequentialism is self-defeating, so become a deontologist".
The problem with deontology is that it is pretty much silent on which good acts to do. Give five dollars to a beggar, or give it to GiveWell?
Eliezer has said he thinks people should be 75% of the way between deontology and consequentialism. I might quibble with the actual numbers but I think that's a good way to think about it.
Realistically, I think, the real issue here is weaponization. As soon as a rule exists for how you are supposed to give to charity, we can use that (just like we use everything else) to demonize any person or group we don't like...
The only out is to condemn weaponization per se. I have my theory of how I want to give, you have your theory, and let's both try to be decent people according to our theories...
Explaining why you think your theory is optimal? OK.
Explaining why my theory is not optimal? uhhh...
Insisting that there's only one honorable theory? Yikes, that crosses red lines.
Weaponization is when the primary use of a moral principle becomes to attack *others* rather than as a guide to oneself.
It's basically the fate of every "bright-line" moral principle: the first generation (self-selected, and by definition unusual people) picks up the principle as a way to live better; a later generation (mostly "normal" people with the normal social instinct of caring mainly about an us and a them, and of hurting the them) switches to using the moral principle not as a guide to their lives but as a way to attack others.
We are living through this right now with "Woke". But it's happened many times before. Look at the Reformation. Look at the Cultural Revolution. Look at Stalin's regime. Look at Donatism.
It's going to be super hard to come up with a theoretical foundation for this, though. Most ethical theories start with a theory of value, and trying for a blend here won't be easy.
Was just discussing in another context: mono-paradigm thinking is in general a very bad idea. Please don't; use a mixture. Each part has some good arguments for it, so it's fine. The meta-paradigm is that no argument can be complete, and so no mixture weight can really be zero.
Sure, but rule utilitarianism is pretty different from deontology. Rule utilitarianism still accepts the value theory of utilitarianism, and merely proposes that attempting the ethical calculus won't work. Deontologists will still be appalled.
My opinion is that two-level utilitarianism is probably the most sensible - we accept established standards and rules of thumb in most cases (aware both of the difficulties and the risks of bias in doing the actual calculus), and only break out the attempts at hedonic calculus when we feel that they're insufficient in some way or when we're led to question the rule of thumb.
The plot of A Christmas Carol has ascended to become a pretty standard stock plot (https://tvtropes.org/pmwiki/pmwiki.php/Main/YetAnotherChristmasCarol). What other plots created in the modern era have done this? TVTropes suggests It's A Wonderful Life, but I think that's a bit different - creators usually just copy the device of a character being shown what the world would be like without them, rather than the entire plot.
Besides A Christmas Carol, The Gift of the Magi is another obvious stock plot (though nothing is anywhere near as ubiquitous as ACC). I'm surprised no one has mentioned it yet.
I am not really sure this is an example of borrowing a plot to create a new work. It looks much more like taking a plot that has a particular setting in the original and reimagining the setting. I mean, would you say setting Shakespeare's Julius Caesar among the fascist militias of the '30s is a new version of the plot of Julius Caesar? In a purely schematic reading, there aren't really that many plots; it's in the details that they acquire their individuality.
I can’t remember who it was, but someone did propose that there are really only two plots: someone leaves home, or a stranger comes to town.
Borges suggested there were really four plots: the siege of a city (the Iliad), the return home (the Odyssey), the quest (the Argonauts), and the self-sacrifice of a god (Attis, Odin, JC Superstar).
I'd like to see an analysis of Law and Order plots and how many are the exact same plot with different minor details. They probably have a dozen or so templates that just get rotated through.
Air Force One? Die Hard in a Plane. Under Siege? Die Hard on a Navy ship. There's an apocryphal story that this eventually came full circle, when someone who hadn't seen the original pitched his idea as "Die Hard, but in an office building."
This is a fascinating thread. I assume everyone here is right, I certainly haven't consumed a sufficient fraction of world media to say otherwise, but I've never seen a plot imitation of A Christmas Carol, Cyrano de Bergerac, Groundhog Day, or Rashomon. Numerous film versions of the first two, for sure, and tons of Shakespeare burglaries, of course. Same with the Connecticut Yankee plot, that one's ubiquitous in SF and fantasy.
I know about Roxanne, but to be frank I see it as the same category as "look how creative I am, it's The Tempest but I put people in suits" and Baz Luhrmann's Romeo and Juliet: it's not an imitation, it's just the original recostumed.
Roxanne is a very fun adaptation of Cyrano. And I've seen the "smart ugly guy feeding lines to the good looking dumb guy who's wooing the woman they both love" scene played out a lot of times in different places.
The plot of Groundhog Day is essentially a person is doomed to repeat the same mistakes/ go through the same motions/ endure the same torments over and over again, until some realization or a change of heart or intervention allows them to escape.
I could argue that it is exactly the same plot as the myth of Prometheus or Sisyphus, except that they never get the payout.
Groundhog Day is interesting in that there's two strains to it. There are stories like Groundhog Day or Palm Springs where the protagonist is trapped in a timeloop and the goal is to stop looping, but then there are stories like Edge of Tomorrow or Source Code or Re:Zero, where the timeloop is a tool the protagonist is using to solve some other more action-y problem, using their foreknowledge and infinite retries to explore all the angles until they find a way to win.
I suspect that the latter type owes as much to video games with the ability to save and reload as it does to Groundhog Day itself.
Rashomon was largely based on a Japanese short story called “In a Grove,” published in 1921. One of the influences on that story seems to be a long poem by Robert Browning, The Ring and the Book, which he based on the written account of a real crime that took place in Italy in the 1600s.
I agree that Vice Versa is the best answer, but merely for the sake of trivia, I want to point out a humorous poem, “Grandpapalittleboy,” from D'Arcy Wentworth Thompson's 1864 volume Nursery Nonsense, or, Rhymes without Reason. Much like Vice Versa, Freaky Friday etc., the poem revolves around the absurdity caused by an adult and child switching bodies, and begins:
Last night, when I was in my bed,
Such fun it seem’d to me;
I dreamt that I was Grandpapa,
And Grandpapa was me.
But the poem is so slight, that I second Vice Versa as the urtext, and only mention Thompson as a colorful aside.
The Hangover (person wakes up in highly unusual / plot hooky circumstances with no memory of the immediately preceding events, must piece together what happened while also facing some external deadline)
I think Twain invented the concept of "time traveller uses modern science to defeat enemies and change history" in A Connecticut Yankee in King Arthur's Court. It pops up all over the place now.
A Christmas Carol was 40 years earlier. Although Scrooge didn’t interact with the past, he time-travelled to it, and to two futures. Neither future happened, so it was effectively a parallel-universe story.
Campbell was *absolutely* wrong. Classic example of filing off inconvenient parts of your data to fit the thesis. There's also a bunch of astrology-style vague statements that are easy to fit to lots of different cases, such that you end up with a theory that seems insightful but actually means nothing.
I know Lucas talked a big game about it, but I wonder whether it's really more justifiable to say "Star Wars is based on Campbell's monomyth" than "Star Wars is based on The Hidden Fortress" or "Star Wars is only good because Lucas' wife at the time and a number of other prereaders heavily revised his script", or even "actually Star Wars is not good except in the sense of being visually spectacular, even Harrison Ford hated the dialogue".
I'm honestly a lot less invested in/knowledgeable about Star Wars than one is meant to be as a dork, so I don't know at all in what proportions which of these claims are really justifiable.
How does one protect oneself against vice and the inherent corruption of power, assuming one wants to be a force for good? Is the model that the FTX folk started out noble and then went off the rails, or that it was always, or for a long time, a grift, beginning perhaps with the very concept of earn-to-give?
A solution I've seen in the technology space is "give people the ability to walk away from you if they feel you've become tyrannical".
Vitalik Buterin (creator of Ethereum, the 2nd largest cryptocurrency) gave away the overwhelming majority of his wealth during the 2017 cryptocurrency spike to a combination of GiveWell and the Ethereum Foundation. This bought him a lot of goodwill within the Ethereum development community, but critically made his continued influence within the community contingent on maintaining that goodwill.
Crucially, in all the open source cases, there's *nothing* keeping the entire community from forking the whole project and shutting out the benevolent dictator, save a group consensus that the dictator is doing a good job of leading the project.
A few people in the inner circle, barely in their 30s, on drugs daily, living in the Bahamas in shared apartments entangled in a web of polycules, with access to billions of dollars... that should never have raised any concerns, right? ;)
So, I have a bit of an inferiority complex generally, and maybe a chip on my shoulder, where I assume people in these companies have to be brilliant (my hope is that the two counteract each other and I come across as normal), but… I work in a normie mainstream bank that does stuff in dollars. I have been audited harder over whether or not we inappropriately charged someone $0.11 than these guys apparently were over billions of dollars in capital. I have pretty broad authority to do some stuff, but definitely not everything I can imagine, and I also know I will eventually have to explain myself to somebody else; there's always somebody looking over my shoulder. It's painful to have to share report coding with some auditor who has no context or idea of what you're doing or how it really works, but you have to do it so that it turns up stuff like this. Just the idea that the founder is the only person able to alter the source code fails a lot of basic maker/checker stuff, and that would have come up on even a really low-fidelity deployment review. I would have quit before I passed an audit that didn't have that as a finding.
"I have been audited harder over whether or not we inappropriately charged someone $0.11 than these guys apparently were over billions of dollars in capital."
Ditto here in my time as minor government bureaucratic minion, where the auditors came annually to look over the books and we *did* have to have an explanation for "the quote was for €30.00, the invoice was for €30.50, why did you pay that extra €0.50?". I did an accountancy course where a lot of the class were also public/civil servants, the guy teaching the course came from private industry, and when one of the class was out by about €5,000 on a sample problem he said "don't worry, this doesn't matter in real life" and we all laughed and explained that in our day jobs we had to account down to the last cent, because dealing with the public money is not like private companies.
Part of the problem was that the people involved were all well-connected, so it was no problem to get Dad to ring his old buddy in the SEC and get him to let you decide what should go into the regulations.
I also wonder if there might have been a status anxiety thing at play. No one making a “mere” six figures wanted to challenge the billionaire wunderkind. I just can’t see how this wouldn’t turn up otherwise.
I’m relatively new to this blog and the broader EA ecosystem, so I think I’m missing some background knowledge/philosophy about prediction markets.
The idea of crowdsourcing whether or not to trust a public figure strikes me as extremely unreliable, like you’re explicitly opting into trusting mob rule and potential mania over your own judgment. Every day on Twitter, someone’s reputation gets temporarily ruined over nothing, and it seems like the entire point of trusting someone is that you wouldn’t change your opinion on a dime just because some anonymous people are freaking out.
If the idea is that putting money on the line guards against this kind of mania, I guess I’m skeptical. We see irrational exuberance in more liquid markets all the time. And in the case of someone like Vitalik Buterin, it seems like you could already bet against his trustworthiness by shorting Ethereum; if you don’t think current market dynamics reflect his trustworthiness, why would you expect this much less liquid market to give you more insight?
I’m sure this isn’t some brand new critique of prediction markets, so is there a good write-up somewhere that explains whatever I’m missing?
Current market dynamics reflect our civilization's collective best guess about Buterin's trustworthiness, and likewise they did regarding SBF's. Yes, it turned out that it was wrong, but people who knew better could've made money on their knowledge. Given that they didn't, they don't get to say that they knew better. The whole idea is not so much about getting perfect insight, it's for people to put their money where their mouth is, so that in the future we can easily judge their track record.
I am halfway done with a prediction market FAQ, so I am going to avoid answering this in the hopes that the FAQ answers it better later. Other people can preempt me if they want.
It is curious to me that everyone is trying to blame SBF's philosophy for what he did.
The closest examples of individuals who did what SBF did are the rogue traders (Nick Leeson of Barings Bank, Jerome Kerviel of Societe Generale, and Bruno Iksil of JP Morgan being the three biggest), yet I don't think anyone tried to say that Irish football was linked to Leeson's reasoning in doing what he did.
So it's interesting how much EA is being mentioned in conversations about SBF's motivations for doing what he did.
"The effective altruists I know are really excited about Carrick Flynn for Congress (he’s running as a Democrat in Oregon). Carrick has fought poverty in Africa, worked on biosecurity and pandemic prevention since 2015, and is a world expert on the intersection of AI safety and public policy (see eg this paper he co-wrote with Nick Bostrom). He also supports normal Democratic priorities like the environment, abortion rights, and universal health care (see here for longer list). See also this endorsement from biosecurity grantmaker Andrew SB.
Although he’s getting support from some big funders, campaign finance privileges small-to-medium-sized donations from ordinary people. If you want to support him, you can see a list of possible options here - including donations. You can donate max $2900 for the primary, plus another $2900 for the general that will be refunded if he doesn’t make it. If you do donate, it would be extra helpful if the money came in before a key reporting deadline March 31."
"In the crowded primary for the newly created 6th Congressional District in Oregon, first-time candidate Carrick Flynn has attracted over three times as much outside spending as any other House candidate this year.
The lion's share, over $10 million, came from the super PAC Protect Our Future, established by 30-year-old crypto billionaire Sam Bankman-Fried, the founder of cryptocurrency trading platform FTX.
Flynn and Bankman-Fried are both members of a philosophical movement known as effective altruism — as part of a network of researchers and philanthropists, they're dedicated to working on the truly big threats to the future of the human race: pandemics, climate change and nuclear weapons, for instance.
...If Flynn does pull out a primary win, he could be the first effective altruist in Congress, and effective altruists are playing a very long game. Their "30,000-foot perspective" takes an exponentially lengthier view. One essay associated with the movement calculates that if the human species lasts as long as the average mammalian species, around 100 trillion more of us could live and die over the next 800,000 years. Some members are debating what the "foundations for space governance" should entail — not exactly topping any issues polls with voters anywhere.
Bankman-Fried was practically born into this realm of thought; his parents are both Stanford law professors with an interest in utilitarianism. He went to MIT, and at the beginning of his career, he worked at the Centre for Effective Altruism. By the time Bankman-Fried celebrated his 30th birthday, he had built a crypto exchange with a multi-billion market capitalization.
...Bankman-Fried has argued that more effective altruists should steer themselves toward making a positive impact on U.S. policy and he was one of the top two contributors to Biden's campaign in 2020, just behind fellow billionaire Michael Bloomberg. This April, when he appeared on a podcast that's big among effective altruism devotees, he was bullish about the power of outside spending in primaries: "The amounts spent in primaries are small. If you have an opinion there, you can have impact."
So the associations between EA and Bankman-Fried were being made from the start, this is not simply "Amongst the other things he threw money at, like sponsoring sports, he made donations to this charity".
Nick Leeson became a commercial director of Galway United ten years after Barings. So there’s clearly no relationship there between his illegal acts and his subsequent career.
I think most of it's that non-EA people think of EA as being very high-and-mighty and presumptuous, since the core of the philosophy is saying that actually you *can* directly compare charities with one another; and after deciding which ones are best, you donate to those.
This is implicitly very unflattering to (as a first approximation) everyone who's interested in any other charity or cause area.
Nah, they think that EA is presumptuous not because the notion that one charity might be objectively better than another is inconceivable to them; it's because they dismiss the idea that some outsider nerds could possibly be authorities on this.
To my knowledge, none of the people you mentioned crafted an extensive public persona about being the Philanthropy Billionaire who earned money to give it away, ran massive pseudocharitable foundations etc.
Exactly. Yet they did something similar to what SBF did. And we don't know the extent of the damage, but FTX's losses could potentially be less than the losses they incurred, adjusted for inflation.
So why do we blame SBF's philanthropy persona and not Leeson's Irish-football persona?
Because Nick Leeson did not do what he did in order to get a job with Galway United, nor did he have public interviews talking about his love of League of Ireland football and how this influenced him into wanting to make tons of money to support it, nor did Leeson get flattering puff pieces about how he was a millionaire philanthropist.
Because SBF's philanthropy persona is all about acquiring large sums of money and redirecting them to where SBF subjectively thinks they do the most good, apparently even if that means stealing billions of dollars. I don't think that soccer is remotely comparable.
Also, from what I can tell, he didn't become a soccer CEO until after he was out of jail for the fraud.
I agree that EA should not be blamed, but it’s not crazy to say that his overall worldview, and potentially that of most of defi, is to blame. The rogue traders that you list did what they did despite and in flagrant avoidance of dozens of external regulatory and internal risk controls, which is clearly not the case here. The philosophy of the institutions about risk and compliance in the two scenarios is vastly different.
There is a lot of Chesterton’s fence demolition that is central to defi, and a reevaluation if not outright rejection of a lot of what have become first principles in traditional finance. So when the question is asked “should we steal from client accounts to support our white knight rescues of the future of finance?” it’s a lot easier to end up in the cost-benefit failure mode that Scott identifies rather than the clearly correct deontological answer.
I want to write an article about this, but I think the thesis will be something like:
There is nothing particularly valuable about crypto except as an unregulated sub-economy for people worried about badly-intentioned-authoritarian or well-intentioned-regulatory interference in the regular economy. If you agree that this escape valve is a good thing to have, you should support crypto (and want it unregulated, and accept that any specific project operating on it is plausibly a scam). If you disagree that this escape valve is a good thing to have, you should ban crypto. There's just no reason to want crypto to exist but also regulate it, and I don't understand why everyone thinks this is the reasonable compromise solution.
Nuanced premises rarely are. The widespread attitudes toward crypto are "it's all scams and terrorists and drug dealers" and "don't give in to FUD, crypto is still the clear path towards a glorious utopian future".
There’s a good argument for advancing crypto technology to be such a backup plan. But an unregulated financial infrastructure, even if intended only for good uses, is likely to be overwhelmed by actors who just want to take advantage of the unregulated nature of the system. That in turn would bring about calls for regulation. Maybe the best chance for an equilibrium state is one that is regulated for the most part, but projects that carry the original benefits survive on the fringes as long as they don’t get too big and attract much attention.
It is worth noting that the valve has existed for decades (in the form of suitcase finance), but AML and TF regulation has closed that valve considerably, right around the time crypto became more mainstream.
I think "have less regulation" is going to very naturally be linked to "have more scams", unless regulation is literally useless.
I agree that we should be trying to change that, but I think that looks like better/more interesting crypto technology to make things that are both hard-to-regulate and hard-to-scam, and that "support better/more interesting crypto technology" looks a lot like "support crypto", plus half of the new technologies will turn out to be scams.
The particular example I'm thinking of is better/more convenient noncustodial wallets (so that your money isn't on an exchange which can collapse or defraud you), but I am stealing that from Vitalik and I don't actually understand a lot of the considerations here.
I think crypto has proven itself a bad method for avoiding regulation. If nothing else, you're not really getting away from regulation. Crypto is regulated; it is just regulated by its code rather than by the law. In essence, you're replacing the risk of the government changing the law on you with the risk of trap clauses in the codebase. Now, if you live in a country where the government regularly messes up the economy, maybe that hedge is reasonable, but it would have to be an extreme threat to match the risks of crypto.
You might be right about external regulatory controls, but I don't really agree with internal risk controls. There is no way SBF could have done what he did without serious skirting of his own internal risk controls. It would have leaked way sooner.
And FTX was never DeFi. I have never considered it DeFi, and neither have most people in crypto. FTX was a brokerage that allowed people to trade crypto more efficiently and with complex financial instruments. It had nothing to do with DeFi.
Early reports may turn out to be wrong, but some of the unverified rumors that have come out include that SBF was granted direct uncontrolled access to official books and records, and the ending “hack” may have been facilitated by internal technology staff. When that kind of thing happens, it’s not a skirting of internal risk controls, it’s an intentional decision not to have effective risk controls.
I'd say that preferring "classy" entertainment is a *little* sus. I'm sure there are people who actually just like it, but there are also people who think that they *should* like it because that makes them Better than others.
Sincerest condolences, Scott. For whatever it is worth from an internet stranger, having the part of you that's vulnerable enough to open up to other people get stepped on is just about the worst. But you were never wrong to want to see the good in people and then to work with them to do good.
I’m not much of a joiner, but this doesn’t make me think less of Effective Altruism, for whatever that’s worth, as I know it’s near and dear to you. Once a group of people gets to a certain size, you can’t just filter out these kinds of random human behaviors effectively. It was bound to happen eventually.
From what I understand, SBF had complete control over the code base for FTX. I’m kind of amazed that didn’t come up in a simple audit of their governance, but then from what I understand (I listen to the All-In Podcast, and read some articles on it) it sounds like a lot of the VCs didn’t even conduct that level of diligence. It’s like he had a magic wand to touch all the money in all the FTX accounts any time he wanted to make all his other problems go away. That would be a weird question for someone who doesn’t work in finance to ask, and certainly you can’t be at fault for not having asked it.
There is a really interesting question of why billionaires keep doing business after they've made much more money than they could ever need. I think one answer is that most normal people probably don't, and retire after getting moderately rich, and the people who actually become billionaires are all weird in one way or another. I assume most of them are very competitive, or psychologically flawed, or just want to see what will happen. But wanting to donate the money could be another excuse.
I suspect that billionaires start new businesses post-wealth largely because they're interested in the projects those businesses represent, rather than just because they're hoping to make more money.
Examples where I think profit was secondary to why the company was founded:
* Blue Origin
* SpaceX
* Neuralink
* Mark Cuban's Cost-Plus Drugs
* (arguably) Mark Zuckerberg and his pet VR projects. I mean, it's *basically* a completely new business using stockholder money, mostly divorced from the core social media empire, which Mark can do because he's a majority owner of the company.
* OpenAI
A common story seems to be a billionaire saying "X technology seems like it could be badass if it was developed a bit further, but nobody's working on it in a way I approve of. So I guess I may as well use some of my absurd wealth to do it myself."
I bet that only if there's no hope of the enterprise turning a profit does the billionaire make a charitable foundation for it (since at least that way you get the tax benefit, even though you have less freedom of action than with a private company).
"Mark Zuckerberg and his pet VR projects. I mean, it's *basically* a completely new business using stockholder money, mostly divorced from the core social media empire, which Mark can do because he's a majority owner of the company."
No he's not. Zuck owns about 17% of the company. The reason he can do all this shit while the other main owners can only pull their hair out and wail is that they stupidly gave him a golden-ticket arrangement for being a genius founder, where he has voting control despite owning a palpable minority of the stock.
One of my favorite things about this place is that I get to hear about all kinds of things I didn’t already know, and that everyone here thinks a lot differently than I do.
So, on St. Petersburg, which I read as a sort of “the road to hell is paved with good intentions” but with math…
Whenever I find myself getting sucked into a paradoxical vortex like St. Petersburg, I remind myself that the universe is something like “all the principles and paradoxes that there are, competing with and running into each other in ways that are probably not entirely calculable” and also that I am not a super computer. So whenever I stumble across something that seems Deeply Troubling I look up and remind myself that the universe seems to be working okay and just because I found an interesting sort of answer to a weird problem doesn’t mean I knew the right question or context. That might be saying “don’t think too much” or “don’t be so sharp you cut yourself” with extra steps but I think it is the eject button all of us have to press at some point.
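For anyone who hasn't run into it, the St. Petersburg game is easy to poke at numerically. A minimal simulation sketch (Python; the function names are my own invention, not from anything Scott wrote): the classic game pays $2 and doubles the pot on every tails before the first heads, so the expected value diverges, yet any finite run of plays produces a modest average.

```python
import random

def st_petersburg_payoff(rng: random.Random) -> int:
    """One play: the pot starts at $2 and doubles on every tails;
    the first heads ends the game and you win the pot."""
    pot = 2
    while rng.random() < 0.5:  # tails: pot doubles, keep flipping
        pot *= 2
    return pot

def mean_payoff(n_games: int, seed: int = 0) -> float:
    """Average winnings over n_games simulated plays."""
    rng = random.Random(seed)
    return sum(st_petersburg_payoff(rng) for _ in range(n_games)) / n_games
```

Despite the infinite expected value, the sample mean over even huge numbers of plays typically stays modest, which is one intuition for why the "paradox" feels less troubling in practice than on paper.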
It’s reasonable that SBF probably got a lot of signal that the way he was thinking about things and focusing on those small little things would lead to success, and he kept pulling the thread until it led him off a cliff. Until you’ve really truly internalized the understanding that you personally can be an asshole, and not just like accidentally but on purpose, it’s a hard thing to avoid. My guess is he probably talked himself into all kinds of pretzels about how it would be okay.
On the other part for why billionaires keep doing it:
I think we’re all just machines that solve problems. Take away the problems and none of us know what to do with ourselves, or at least that happens after a while. That doesn’t have to be grim, by the way. Lots of those problems are things like “what’s the most fun I could have surfing today” but other people need to be productive outside of themselves. Like my roof started leaking from some storm damage the other day and I almost got excited because that means I get to go up and fix it and I’m kind of nostalgic for when I used to work construction. I like to fix things with my hands.
I’m middle class (probably poor by the standards of some people here, but I’m incredibly wealthy by the standards of my childhood so it’s about all the wealth I can emotionally grapple with) and everything about being middle class is perfect to me. It just fits like a good pair of shoes. I have a job that takes enough of my attention I don’t feel like I’m doing nothing and easy enough that I still have lots of time left for my family. It gives me a structure to set a good role model, but also I get to work from home, so one day I’ll be able to watch him run around the yard from out of my office window. If you had told me when I worked on a drilling rig that life could be that good I wouldn’t have believed you. The only thing that makes me unhappy at all is when I look at the wider world and see things that look like signs of systemic rot because then I think about how my children will have to deal with them.
I’m sure people like Elon get the same thrill from watching Teslas roll off the assembly line or when rockets shoot into space and you just get the feeling that this is the problem on Earth you were meant to solve. I do try to hold in mind that everyone is just “some guy” at the end of the day and everyone is psychologically flawed to a greater or lesser extent. I don’t think you’re ever “done” when you have a purpose like that although it can certainly magnify the level of mistake you make. We’re all processes, not products.
Anyway, SBF probably found his purpose with the crypto exchange and just lost his way.
Have there been any serious studies on who makes up the audience for interracial porn?
The obvious answer would be black men, but the way IR porn tends to portray black guys as animalistic beasts, and the emphasis on the girl's degraded state after the fact, suggests that they're not the intended audience.
Women don't tend to seek out visual porn, and when they do, I don't think it's of the "tiny girl smashed by five thugs" variety. I don't think they're the intended audience either.
Perhaps it's the American equivalent of tentacle porn, and it's for white men who like seeing white women degraded by beings they (consciously or subconsciously) view as nonhuman.
But there are a few questions here. It's usually taken for granted that men watch porn to insert themselves as the male in the video. But does this really work if the man is of a different race than the viewer? What if the main viewers are gay white men who instead actually insert themselves as the girl in some kind of sadomasochistic fantasy?
There's also the argument, popular in alt-right spheres, that IR porn is actually propaganda by certain sectors to demoralize white men. I'm agnostic on that view, and would be happy to hear arguments for and against it. Whenever I see these videos on porn platforms, they tend to have view counts high enough to suggest a fair number of people watch this stuff, whether or not it was manufactured as propaganda.
I always feel that the people who think that the main audience for interracial porn, or gay porn, or any other politicized category of pornography must be straight white Republican Christian men are just telling on themselves. The reason people qualify these theories with "strong suspicions" and "hunches" rather than verifiable facts is because these theories are usually based on nothing more than the smug desire to believe your political enemies have humiliating sexual hang-ups that further justify your own feelings of superiority. It is in its own way a form of mental pornography.
The obvious answer is probably the correct one. A lot of men do watch and enjoy porn that emphasizes the girl's degraded state, and if anything I would expect the appeal to be even greater when the girl so degraded is of a rival race.
I'm not a big fan of the idea that the right-leaning crowd is the main audience for interracial porn; as I already said, I believe it's mostly women. But... aren't you actually making a point in favour of this theory?
The ideas of "rival races" and that having sex with multiple men is degrading for a woman are much more likely to be held by people on the right than on the left. So your obvious answer is disproportionately pointing to the same group: white conservative men, though not necessarily Christian.
A significant part of the population has a violent, instinctual drive when seeing a woman of "your tribe" with a man of "foreign tribe". It's almost like jealousy in that it's hardwired and happens regardless of your worldview. Or perhaps, the worldview is downstream from the instinct.
Well I remember feeling something like this in my salad days before I actually took the time and effort to reflect on this part of my psyche and become a better person. It's really not the case that this happens regardless of your worldview.
Sure, some people can have a naturally stronger version of this feeling, but I also think there is a self-reinforcing pattern here based on whether you endorse and identify with the part of yourself that experiences this tribal jealousy or not. The feeling affects the worldview, which itself affects the feeling, and so on. As a result, people with strong feelings about rival races tend to have corresponding worldviews and thus are more right-leaning.
I'm not sure. I know it's completely baseless, and I lowkey hate my own ethnic group anyway (out of excessive exposure to them if nothing else), but I still feel on edge whenever I see a foreigner with one of "our" women. I'm not sure one can reflect their way out of it.
I expect that you can achieve some improvement through CBT-like practices. Maybe not to the point where you do not feel anything at all, but still.
But even more so, I expect that if you had thought that it's completely reasonable to feel this feeling and identified a lot with it, you would have experienced it even stronger.
I don't think you're understanding my point. The obvious answer is that men of race A are likely to find pornography where men of race A sexually degrade women of rival race B titillating, as it appeals to fantasies of domination, victory and racial revanchism. This is the instinct behind bride kidnapping and wartime sexual violence throughout history, simply sanitized and commercialized for mass consumption.
If the porn in question was about white men and Asian women, I don't believe there would be this much hemming and hawing about who it primarily appeals to. But the good readers of astralcodexten are not so gauche as to believe black men operate on the same sexual urges as all other groups of men throughout history.
Do you understand, though, that such categories as "sexually degrade" or "rival races" are not objective but based on the perception of the viewer? To be actively turned on by the "sexual degrading of a member of a rival race" one has to have a peculiar way of perceiving reality in the first place. And this way of perceiving reality seems to be more common among the right-leaning crowd, which is mostly conservative white men. Thus it's reasonable to expect that they consume interracial porn, probably the kind with white men and black women, but maybe the opposite configuration as well, for the sake of breaking taboo.
I think this peculiar way of perceiving reality is a lot more common than you believe. I know that nobody in this comments section ever thinks of races being in rivalry with each other, but I'm not so sure the general population shares this aversion.
Voting habits, colored by race relations, belie the fact that plenty of black men have "right-leaning" views.
If I may, you may be marginally mis-reading the thesis of Ape in a coat here, Florian. No comment is being made on the common-ness of "this peculiar way of perceiving reality". Might be no one. Might be everyone. It appears somewhat irrelevant. Instead, a comment is being made on the nature of user attraction.
Your theory ends up suggesting that right-leaning crowds would favour a certain kind of porn. Which is the very thing you start out by stating your theory disagrees with. You create a loop, is what Ape is commenting on, and what caught my eye as well.
Perhaps if I rewrite the flow a bit it becomes clearer? I find it really intriguing.
You state:
-- " I always feel that the people who think that the main audience for interracial porn, or gay porn, or any other politicized category of pornography must be straight white Republican Christian men are just telling on themselves. " --
Then you give your reasoning why this suggestion is in error. E.g., attributing unsavoury characteristics to your thought-enemies is somehow titillating, and justifies a sense of superiority. Definitely something I agree with - it is maliciously easy to assume that people one disagrees with are all secretly [Something Bad]. It's a bit of a mental masturbatory trap. I might even phrase it as -- wait, you've done it for us:
-- " [. . .] It is in its own way a form of mental pornography. " --
Then you offer a fairly reasonable alternative explanation of interracial pornography:
-- " The obvious answer is probably the correct one. A lot of men do watch and enjoy porn that emphasizes the girl's degraded state, and if anything I would expect the appeal to be even greater when the girl so degraded is of a rival race. " --
So if I may paraphrase your statements here? You note that there may be more obvious explanations for why interracial pornography appeals than the "mental pornography" of attributing a shameful sexual fascination to political opponents. It's just because some people like that kinda thing. Someone might watch a member of a rival race being degraded because it appeals to the fantasies of domination and victory. Men of Race A are likely to watch porn where men of Race A pound women of race B into the ground. Might be it appeals to some instinctive urges.
As you write:
-- " The obvious answer is that men of race A are likely to find pornography where men of race A sexually degrade women of rival race B titillating, as it appeals to fantasies of domination, victory and racial revanchism. This is the instinct behind bride kidnapping and wartime sexual violence throughout history, simply sanitized and commercialized for mass consumption. " --
The mild trap here is what Ape in a coat points out. You're saying that the obvious appeal of interracial porn is to watch members of a differing race be degraded.
Ape is noting that in order for that to appeal, someone needs to have the categories of "rival race", "degradation by sex", "interracial conflict" and a fair chunk of other associated mental models. In order to enjoy the thing, one does need to perceive the thing as existing.
And the people who tend to have those mental models tend to be straight white republican conservative men. You know, the kind of people who might be inclined to think about "rival races" and "interracial rivalry" and consider the varying sexual urges of varying races.
So you're saying that the /idea/ that straight white republican conservative men are (one of the) primary consumers of interracial degradation porn is incorrect because they're . . . The kind of people who would be the primary consumers of interracial degradation porn.
And actually, one might even suggest here that you're saying that the "mental pornography" of ascribing "humiliating sexual hang ups" to people one disagrees with is incorrect because . . . The kind of people who would be interested in the shameful sexual hangup of raceplay is precisely the kind of people who tend to have political ideologies that trend towards politicised notions of race.
It's just a little funny, is all. You're saying the theory is wrong because of all the reasons that make it not be wrong.
I don't know, rival races in a zero-sum competition seems a lot like critical race theory!
But my guess is that those inclined to obsess upon the significance of the races of porn actors (as distinct from appreciating a bit of variety) are likely to fall in a pattern compatible with horseshoe theory.
Other people have already dealt with the weirder explanations, and pointed out that women (including lesbians, oddly) can be into interracial porn. I think a big market is white men, though, and I have strong suspicions around the connection between interracial porn and right-wing or conservative political leanings. It's not just Paul Manafort with that particular mix.
"It's usually taken for granted that men watch porn to insert themselves as the male in the video. But does this really work if the man is of a different race than the viewer?"
I would say both of these statements are questionable. Given porn with only women is also quite popular, it's not clear that self-insertion is the motivating factor. Nor is it unbelievable a man could insert himself in a person of another race. I doubt the average viewer looks much like the male porn model in many other ways, such as age, size, fitness, etc, but race is somehow a bridge too far?
"What if the main viewers are gay white men who instead actually insert themselves as the girl in some kind of sadomasochistic fantasy?"
It seems much more likely they would be watching interracial gay porn.
PornHub does fairly extensive research on this type of thing with their data. But I am not sure if they address this question specifically, and I don't really want to research it on my work computer...
I'm not sure if the animalistic part is the problem. Bad romance novels about "cute naive maiden & big bad werewolf" are, apparently, catnip to a large market segment.
> Women don't tend to seek out visual porn, and when they do, I don't think it's of the "tiny girl smashed by five thugs" variety.
According to my girlfriend, that genre is exactly what a huge demographic of women seeks. Especially if it has good storytelling from the woman's perspective.
My totally unfounded working theory is that any particular porn subgenre has a strong audience for whom it is taboo. Interracial porn is watched by racists, transgender porn is watched by transphobic people, and BDSM porn is watched by prudes. The prevalence of step-sibling porn is because it's just on the edge between "taboo enough that it's exciting" and "not so offensive that you feel like a freak for watching it".
"BDSM porn is watched by prudes" does not seem to match the pattern set by the rest of your examples. Wouldn't something like "BDSM porn is watched by people who dislike the suffering of others" be a better fit?
That describes a lot of people. I think the person's social environment has a lot to do with what taboos they find exciting, so if you're a bleeding heart hippie, probably no one will care whether you have a mild kink, because conspicuously disliking the suffering of others tends to go together with sex-positivity. But if you're an evangelical Christian, the opposite is true, so insofar prudes watch porn at all, it'll be more popular if it's kinky in a way that their peers would disapprove of.
Exactly. What a classic example of a paranoid conspiracy theory. People will postulate no end of ridiculous assumptions, so long as they get to think it’s all about them.
Bit of self-promotion here (please delete if it's too spammy!) but I wrote a post comparing forecast accuracy for PredictIt, FiveThirtyEight, and Manifold Markets, looking at their predictions for the US midterm elections:
If people are interested I'll keep doing these types of analyses for future election cycles, as well as for other types of prediction markets and forecasts (like sports betting, weather forecasts, etc).
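(Not from the linked post, but for readers wondering what "forecast accuracy" usually means in this context: one common metric is the Brier score, the mean squared error between probability forecasts and binary outcomes. A minimal sketch, with the function name my own:)

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always guessing 50% scores exactly 0.25."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two resolved races: a confident correct call (0.9 -> happened)
# and a mild miss-leaning call (0.3 -> didn't happen).
# brier_score([0.9, 0.3], [1, 0]) gives 0.05.
```

Comparing platforms then amounts to scoring each one's stated probabilities against the same set of resolved elections.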
Regarding diminishing marginal returns to charitable dollars, I think Scott is clearly right, but SBF explicitly said the opposite on the 80,000 Hours podcast earlier this year.
"If you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter anymore…"
The least comfortable bit of all this for me personally, apart from losing some trivial amount of cash I had parked in FTX (which I would have probably gambled away anyway had it not been stolen, so I'm OK with it) is watching a huge crowd of EA people I had previously respected making arguments that mostly seem to boil down to 'doing crimes for our benefit is bad because it might be bad PR for us and thus not actually benefitting us'. Honourable mention to EY who seems to have grokked that hurting people isn't good, even if you're much more clever than they are and thus they deserve to have their money taken and redistributed for the greater good. The whole thing just makes me want to stand on a roof and scream out the gods of the copybook headings until I fall off it.
I mean, dude, the central thing that utilitarians or consequentialists believe is "hurting people can be good." Like, the very first thing that anyone learns about this philosophy is the trolley problem, in which their answer is "you should murder a guy (so that five guys don't die)."
If you're a utilitarian or consequentialist who has a hard rule that says, "You may not hurt people, no matter what," congratulations, you're not a utilitarian, you're a deontologist.
I don't think anyone actually works this way. For example, child vaccination is non-consensually causing pain to a child for the greater good. You need a coherent theory of when to do this vs. not do this, which I think rule utilitarianism provides.
I'm just saying, when John Roxton upthread says, "hurting people isn't good, even if you're much more clever than they are," that statement is rejecting utilitarianism. The entire central core of utilitarianism is "I believe that someone can be clever enough to hurt someone and have it be good." If you don't believe that, you aren't utilitarian.
Obviously you don't need to think that this particular event is good -- but if you reject it by stating the principle that "hurting people is universally bad," you're rejecting utilitarianism. If you reject it by appealing to the specific balance of good and evil involved in this situation, then you're doing so from a utilitarian framework.
I think you are wrong about what utilitarianism is or isn't. I have an old and extremely embarrassing Consequentialism FAQ that I wrote when I was in my early 20s and much stupider - it's not actually that good, but it's probably the best explanation I can provide. See https://web.archive.org/web/20161115073538/http://raikoth.net/consequentialism.html
I think I'm right about what utilitarianism is and isn't (though I stated it informally and starkly, and if we were going to quibble around the edges of the definition you could probably find a few ways to manage to have a not-actually-real-but-coherent utilitarian philosophy that forbade actually hurting anyone). And I also think that this isn't even remotely controversial. Every utilitarian ever has engaged with the trolley problem from the perspective of "I can think of toy examples where hurting people is okay."
(I also don't think that this is like "the big problem with utilitarianism," or anything -- I agree with you that people in general, no matter what their philosophy, don't hold an inflexible rule of "nobody should ever be hurt.")
To briefly engage with your FAQ: yes, weak rule utilitarianism is, as you point out in the FAQ, a heuristic. Strong rule utilitarianism is, I think, not actually a philosophy that any meaningful number of people hold, and is basically deontology.
I think part of the right answer to "why shouldn't you do bad things for the greater good" *is* because those things so rarely end up leading to the greater good, and "PR disaster" is an easy generic example of why that should be.
I think Gods of the Copybook Headings is important because those are principles passed down by cultural evolution, and the reason cultural evolution gave us those principles was because fit societies survived, and the reason societies that didn't do bad things for the greater good (which theoretically sounds great for a society!) were less fit was, indeed, because they kept blowing up in various bad ways. So I don't think there's a conflict between the "it will cause such-and-such bad consequences, here are some examples" perspective and the "it's just evil, don't do it" perspective.
Yeah. Take the organ theft thought experiment, for instance. It's trivial to reject it on the ground that it will keep people from going to the hospital if they're anything but terminal, and even if claimed to be kept secret, you should not accept that this will be successful. Only when phrased as "no, you have to accept the premise that perfect secrecy will be maintained" does it even become a challenge, and then you're so far away from any real-world situation that it doesn't matter except as an intellectual exercise.
"you're so far away from any real-world situation that it doesn't matter except as an intellectual exercise"
Yeah, that's true, in which case it doesn't matter if you pick the option "to hell with these five guys, let the trolley squish them, I wanna hear the screams" because it's nowhere near what you would do in the real world, and if the person proposing it tries to argue you into "but you should do the greatest good", then it has to be applicable to real life. Since it's highly unlikely any of us will be standing by a switch on a track, when a runaway trolley comes down the line, *and* there is one person tied to the track on this side *and* five people tied to the track on the other side, then it has no real world application and you may as well let your inner Snidely Whiplash free.
The surgeon one is a decent plot for a B-movie horror of the old school, but real life? Don't be silly.
The trolley problem with an actual trolley doesn't happen in the real world, but the problem "I have to hurt some people to help some more people" happens all the time. *Ordinary taxation* is such a trolley problem.
Or much maligned "death panels", which really are a thing, unavoidable one at that. The big issue with trolley problems is that they assume total certainty about outcomes and no second order consequences, and those features definitely are very far away from real world situations.
The "PR disaster" point of view is exactly what led the Catholic church to its current disaster of a situation concerning sexual abuse scandals. Because when you are in the PR disaster mindset the solution is not to not do bad things, it's to keep them secret.
The best option is not to do bad things, but once the bad things have been done, the more you are focused on "keep the bad things from causing a scandal," the less energy you have for "keep the bad things from happening again."
Yes, agreed. If something is true it will be true whichever way you reach it.
I guess my worry is more that... suppose SBF had got away with it, this time. Suppose that he made that massive gamble [edit: using embezzled client funds], it paid off, and then he missed a dose of his Parkinson's meds or whatever he was on and woke up and thought 'oh shit that was mental we'd better not do that again', and he carefully put all the money back where it should be, spent his massive surplus on mosquito nets and *no-one ever found out* and he never did it again. (sidenote: how many times did this or something close to it happen in reality?)
In this scenario, did he do anything wrong? I think this is where we will start to see the approaches diverge.
I think the answer is something like: if it was just a massive gamble that didn't break any laws or ethical injunctions, like me spending my own money on the lottery, and it was +EV, then he did nothing wrong.
Since he did break the law and ethical injunctions, I think even if it counterfactually turned out +EV in the end, he did do something wrong, and that thing was to break the law and ethical injunctions. Maybe in the counterfactual we might judge him less harshly, for the same reason we judge attempted murder as less bad than murder, but we should at least judge him a little. Again, see my post https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
>even if it counterfactually turned out +EV in the end
That's not how EV works. EV is the probability-weighted average of payoffs over all possible futures, estimated before making the decision; that's what the 'expected' part means. As you argue, in such situations it's overwhelmingly likely to be very negative.
It's possible that a -EV decision ends up being profitable in our universe, like deciding to buy a lottery ticket that ends up winning, but that doesn't retroactively make it a good one.
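To make the distinction concrete, here's a tiny sketch (Python; the names and the lottery numbers are hypothetical): the EV of a ticket is fixed the moment you buy it, regardless of whether that particular ticket happens to win.

```python
def expected_value(lottery: list[tuple[float, float]]) -> float:
    """Probability-weighted average of net payoffs, fixed at decision time."""
    return sum(p * payoff for p, payoff in lottery)

# A $2 ticket with a one-in-a-million shot at $1,000,000:
ticket = [(1e-6, 1_000_000 - 2),   # win: net +$999,998
          (1 - 1e-6, -2)]          # lose: net -$2
# expected_value(ticket) is about -$1: a -EV decision,
# even in the one-in-a-million universe where this ticket wins.
```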
Yes, I didn't make it clear but in my imaginary scenario above he was doing what he was (allegedly? do we still have to say that?) doing in reality, ie making massive high leveraged gambles with other people's money without their knowledge or consent and in violation of his own ToS as well as what I imagine to be several laws (fraud is fraud even if you aren't regulated, sorry).
I know this will sound silly but I'm really relieved to hear you call it out as wrong; it would have been a pretty brutal prior adjustment otherwise. Thank you for taking the time, and all the best with your bed-gibbering.
Maybe this is really dumb, but I am just dying to figure it out.
Why were all of these grants dependent on FTX? I don't understand how grants can be given out that depend on the stability of FTX. Were grantees not given actual cash? I've never received grant money so I don't know how it actually works. But like, what was FTXFF *actually* doing? Where did the money come from? I would have assumed it was just cash that came out of the revenues of FTX. Again, I'm sure this is super naive and ignorant, but I just want to understand... I would have thought that the collapse of FTX meant that no *future* grants would be awarded, not that all existing grants are bunk.
To expand on what Scott said, if you receive large sums of money "for charity" from somebody shortly before they go bankrupt in a massive multi-billion-dollar fraud scandal, the bankruptcy lawyers will be curious how you managed to come out ahead.
Not legal advice, and very fact-dependent, but it may be the case that people who already received, and in some cases spent, money may need to give some of it back to be distributed to other creditors of FTX. Similar things happened to people who, luckily or not, timed Madoff correctly and realized large gains from him. More detail in this link, also linked by Scott above.
In case you meant another angle: at this level, charities raise grants to try to do specific things, e.g. run a study, or buy and distribute 10k malaria nets, and similar. If they were going to just pocket the cash to spend on indeterminate charity, then the grantors would (generally) rather keep it themselves to spend on indeterminate charity later.
I think it's something like: FTXFF has some amount of money in its treasury. They promise a grant to so-and-so in the form of let's say four installments of $10,000 each. Someone gets the first installment and is planning on three more. FTX goes kaput. FTXFF freezes its treasury because they're concerned about "clawbacks" in the bankruptcy proceedings. Also, the entire team of FTXFF quit in protest so even if they had the money there would be nobody to give it out.
If it's right that FTXFF has the money (i.e. fiat currency) in its treasury, but has frozen it, then the trustees ought not to have resigned, but should instead have remained in post to attempt to get that money to the grantees. Somebody will need to brief counsel for FTXFF if orders are sought against it in bankruptcy proceedings.
What I suspect may be the case is that either FTXFF doesn't have the money at all, but was making grants in the faith that FTX or related parties would provide money in future, or FTXFF has "money" in the sense of holding now worthless tokens. Either of those scenarios would seem to raise a question about why USD amounts were being committed, although I'm not familiar with the norms around grant-making foundations.
What confuses me on this is that everyone seems to have been planning to spend money immediately, and assuming it would reliably arrive in the future. Which is a risky assumption to make even if FTX was entirely on the level, as they work in a notoriously volatile industry, or could just change their minds at any point and withdraw funding. Surely the thing to do is save money from donations to spread out over time and mitigate risk? And generally try to have as diverse sources of funding as possible.
Charitable funding comes in all sorts of different varieties, which include lump sum one-time payments, grants paid out in installment, "general support" dollars to be spent on all the operations of the organization that gets the money, and project-specific grants that are dedicated to (and may be the entire funding for) the staffing and other costs of a specific project.
It's pretty common, if an organization gets a grant promising 5 payments over five years to fund a specific project, that they would hire employees, rent office space, and otherwise plan to spend those promised future payments, such that their failure to arrive would be painful and disruptive.
Right there with you on the FTX situation. Trying to figure out what was my cognitive error in so strongly presuming that this one particular dude wouldn’t make this one particular obviously wrong series of decisions. The best I can do is, we all have blind spots for people who are sufficiently similar to us in background/culture/etc. We map those real similarities out to presumed similarities in other areas like risk preferences, proclivities to steal customer money, and so forth, even when we should know they don’t hold.
Essentially my model of SBF was, “what would I have done if I were SBF?” But you know there’s a reason I was never in a position to be SBF and that reason directly relates to the stuff that happened subsequently and so the differences should probably be more dispositive than the similarities at that point.
"Right there with you on the FTX situation. Trying to figure out what was my cognitive error in so strongly presuming that this one particular dude wouldn’t make this one particular obviously wrong series of decisions."
I am not trying to pile on, but ... wasn't FTX offering 8% interest rate return on held assets? This was lower than Celsius, but still high enough that the money clearly wasn't being held in a 'savings account', right?
My working assumption is that pretty much every crypto exchange that offers returns like this is running some sort of scam because how else do you generate those returns? And I make the same assumption about the non-crypto folks who advertise these sorts of returns on the radio.
Doesn't this sort of return (with the implied lack of risk ... it isn't like they tell you they are 'investing' your deposits in stock shares) fit the 'if it sounds too good to be true ...' maxim?
8% on the first $10k isn’t like “this must be a ponzi” type returns, that to me just implies that they are staking in defi protocols — not a zero risk undertaking but very different from running a scam.
(And yes I am embarrassed to know the words “staking in defi protocols”)
I don't agree with a lot of EA priorities, I think people in poly relationships and people under 30 are reasonably poor choices for trust in general and with zillions of dollars in particular, and I was simultaneously leery of crypto and jealous of these huge returns...
But man, this sucks. I am so sorry for everyone from dodgy investors to innocent and well meaning charity types, and I am going to esp remember in my prayers those deluded ratbastards who drove it into the ground.
Is there some way someone could set up a link or something where I can donate money to collapsing charities without e-mailing Scott? I don't have enough money to bother Scott with but I'd like to help out.
I'm not here to dance on anyone's bones, and to all the people hiding under beds gibbering, have a "There, there" reassuring pat on the back.
That said, I do want to make one comment, which is "this is the reason EA should stay the chocolate fudge cake *away* from politics":
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America. You, a random AI researcher who tried Googling “who are these people and why are they giving me money” before accepting a $5,000 FTX grant, don’t need to feel guilty for not singlehandedly blowing the lid off this conspiracy. This is true even if a bunch of pundits who fawned over FTX on its way up have pivoted to posting screenshots of every sketchy thing they ever did and saying “Look at all the red flags!”"
Yeah. Exactly. Bankman-Fried and at least a couple others were well-connected via their families, so they had an in with places like the SEC when it came to "Oh, that's my old college pal So-and-so's kid, I'm sure they're just fine!" I know Bankman-Fried was not EA exclusively, but he did want to Do Good amongst other things, and making a shit-ton of money off crypto (pardon the crudity) was one way of doing that, and high-risk was the way to make a shit-ton of money fast; the higher the risk, the higher the reward.
I won't pick on the Dems for having taken money off him, as Republicans would equally have trousered the cash had his political leanings permitted him to donate to them. That's the problem that I wish EA or EA-aligned people who want to be politically active would consider more: yes, they will take your money. And yes, in return they will (appear to) listen to you, and take your calls, and you will think you are doing great and getting ahead on having the people in government becoming aware of The Issues, and meanwhile they will just be ticking off "list of meetings with donors today" in their diaries and it means nothing more than "keeping the cash-cows happy and mooing".
Stick to bed nets. Heck, even AI risk. Stay away from the swamp.
Yes, absolutely politicians will fake interest in your issues to keep the money tap flowing. But do you think that they are somehow above also changing the law the way you want to keep the money tap flowing?
Now, I personally don't endorse spending large amounts on politics - but that's because I think you can get the same impact for a lot cheaper.
Not sure what you mean. Are you saying the political funding played a role in this disaster, or just predicting that it wouldn't have worked even if FTX had done well?
My position is: I agree that when SBF calls up politicians and talks to them about pandemics or whatever, they're not having deep emotional engagement with his arguments. But I think there are like fifty issues that politicians aren't having deep emotional engagement with, and part of (the most cynical interpretation of) their jobs is throwing bones to big donors, so if SBF did this enough times eventually they would say "sure, have some pandemic preparedness" and spend some of their political capital on doing what he wanted so that he kept donating. I don't think that's incompatible with "they just want to keep cash cows happy".
My guess is that SBF was especially into this because he came from a family of Democratic fundraisers, and that the rest of EA won't put as many resources into it.
"Are you saying the political funding played a role in this disaster, or just predicting that it wouldn't have worked even if FTX had done well?"
The latter. EA as a movement seems to be drifting away from its roots or initial premises. Maybe it was always like this, and we are just now seeing it, but there is a big difference between "let's evaluate the best use of charitable giving to see what really does return the most value and positive changes" and "let's try to get a guy elected and donate money to politicians just like any other lobbying group".
"(P)art of (the most cynical interpretation of) their jobs is throwing bones to big donors, so if SBF did this enough times eventually they would say "sure, have some pandemic preparedness"
Yeah, you'd get it. I don't know if the US has quangos, but that's what you'd get:
Committees to prepare studies on possible papers to construct policies for consideration by a parliamentary sub-grouping. Lots of featherbedding, lots of opportunities for patronage by political parties to hand out appointments as rewards to party hacks and long term servants.
Tell me again, how is the Green New Deal doing and the new Green Economy and all those jobs that were going to be clean, high-earning, replacements for the rust belt?
You will certainly get bones thrown for pandemic preparedness. I'm sure Anthony Fauci wouldn't mind taking on another directorship. And hey, we could even add in Alexandros and his ivermectin and alternatives recommendations to make sure we cover all the bases!
A couple of warehouses stocked up with masks and PPE equipment, vaccines, the works!
Yeah, you'd get pandemic preparedness and the politicians showing the nice paperwork to the donors to keep them happy. But would this be any actual use?
Yes, and it worked very well; FTX was the last exchange people thought would blow up, and the level of criminality is huge, on the level of Bernie Madoff.
No one in crypto had a closer relationship to Gary Gensler than SBF and Caroline Ellison.
So people who saw some red flags in the last few months probably (and reasonably so) said: well clearly FTX and SEC have a close relationship, the SEC probably looked at that "red flag" and saw that it was all good.
Weird random question, one of my ribs is pushed almost 1cm back, and seems to be disconnected, or partially disconnected from the sternum. Been to a GP twice already, and they basically said that nothing can be done.
It is kind of a constant irritation though, so anyone has experience with this? Should I push for a specialist consultation? Or is this really something that cannot be fixed.
Something like that happened to one of my lowest ribs. It felt like it was pushed inwards and slightly up, catching on the rib just above it. Eventually it went away.
IIRC, the things that helped were serious stretching and yoga, aimed at torso flexibility.
If it's bothering you, I'd try to see a specialist. He may well give you the same answer, but at least he is more likely to have an in-depth understanding than your GP.
Don't know about ribs specifically, but in general the threshold for going in and doing surgery close to vital organs is much higher than people think. Every time you do that, there's serious risk of damaging nerves, having unexpected bleeding, infection, etc. So your doctor might be correct in their assessment.
If it bothers you due to psychological distress (body dysphoria), you really just need to get used to it. But if it's actually painful, or you suspect it's getting worse, I would try to get a specialist to look at it.
Someone who got his nerves permanently messed up and lost the use of his limbs (and, as a result, his job) because he followed a doctor's advice to get surgery:
I don't see how this is relevant to IJW's question about whether he should get surgery on that rib. Of course there exist people who are permanently messed up because they followed a doctor's advice to get surgery. There also exist people who are permanently messed up because they did not follow the doctor's advice to get surgery -- and people permanently messed up because the doctor should have suggested surgery but did not. And?
The relevant case would be the last one, and I'm not aware of many cases of that. I do know from the RAND health insurance experiment that people given more money for it will consume more medical care, but will not have better health outcomes.
I'm not a doctor, a nurse, or even a home health aide! Here's my intuition, though. Seems kind of important to have that rib connected to the sternum. Your heart's under there! Also, broken ribs can sometimes puncture lungs. If this one breaks all the way off from the sternum, mightn't it puncture something in the area? Also, seems like bones would be easier to work with than soft tissue. Why can't a doctor put in a little metal plate or something that reattaches rib to the sternum? Seems like everything you need to reach is right there on the surface, under a thin layer of flesh, so pretty easy to get at.
HOWEVER, that's all just my intuition. Would be good to ask the GP why there's no cause for concern, and why nothing can be done. Unless the answer is very convincing, go see a specialist.
I've had broken ribs twice and yeah, basically you just have to wait them out. For the first week or so be extra cognizant of any difficulty you may have breathing, looking into signs that the loose bone may have done damage to internal organs. It sucks, I wish you a speedy recovery!
No I have had this for years now. It hurts only sometimes, and mildly. But it is annoying. I think it has solidified somewhat and I have honestly no idea how it happened. I never felt a sharp pain there. But one rib is clearly pushed in about 7-8mm.
Hmm yeah if it's uncomfortable then I'd go see a specialist. My ribcage is still asymmetrical 10+ years after the first broken rib, so that's definitely not gonna fix itself.
I too have had two broken ribs from falls taken while trail running. If it hurts like crazy when you deeply inhale, it is probably fractured. The problem is that some rib fractures don't show up well in imaging. In any case my physicians bound my chest once (it's kind of a corset) and the second time I requested no binding, as the binding didn't serve any useful function I could see. At least that's my memory of events.
Following the blowup, I've been constantly wondering how much overlap it had with ACX. But the answer seems to be "waaaay more than I expected" which is wild.
I’ve recently started a free Substack blog on the psychology of belief, called Beliefology, and I’d be grateful for any feedback on my latest post. I’m particularly interested in whether people think that the reasoning presented in the post is a) clear, and b) sound. I’d also be interested in what people think about the writing.
Oh dear spirits of all the departed drunken journalists who ever boozed their way into the grave, this is the kind of publicity EA does not need. And this is why mocking the normies, even amongst yourselves, is a bad look. Don't do it in public and never write anything down. Stick to making quips at the kind of parties Scott has described for us.
"Here's what I think are some ~cute boy things~
- low risk aversion"
Well Caroline, your 'hot bad boy' and his low risk aversion landed you, himself, and several others in possible jail time (well, not really likely, but a Bahamian luxury compound's worth of trouble anyhoo).
I realise this was a young woman being silly online, but this is why, children, you never shitpost under your real name! You have no idea what is coming down the track!
There's going to be a lot of gloating about dumb stuff the people involved said. A *lot*.
"There's going to be a lot of gloating about dumb stuff the people involved said. A *lot*"
And all of it will be shabby tabloid-culture piling on.
There's no end of things to scourge these people for, but having lots of casual sex in various conformations and doing nootropics are completely irrelevant to their transgressions.
She didn't use her real name, it was an anonymous Tumblr and someone doxxed her.
I think if you are a random Jane Street trader deciding how to invest, it's right to be risk-neutral at least up to the size of some large fraction of the treasury of Jane Street. I think once you become big enough that there are externalities and you become a significant fraction of all the money in the world, that breaks down. I would have thought Caroline (including the version of Caroline who wrote that post) knew that, which is part of why I'm waiting to see what the explanation here is.
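The point about risk-neutrality breaking down at scale can be sketched with the Kelly criterion (my own illustration with hypothetical numbers, not anything from the thread): once you are staking a large fraction of everything you have, even a clearly +EV bet destroys a bankroll if you size it too aggressively, because compounding cares about expected *log* growth.

```python
import math

def log_growth(p, f):
    """Expected log-growth per round when betting fraction f of the
    bankroll on an even-money bet won with probability p."""
    if f >= 1.0:
        return float('-inf')  # a single loss wipes out the bankroll
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6                      # a +EV coin: win 60% of the time
kelly = 2 * p - 1            # Kelly fraction for even-money odds = 0.2
print(log_growth(p, kelly))  # positive: wealth compounds over repeated bets
print(log_growth(p, 0.99))   # negative: near-all-in ruins you despite +EV
```

Risk-neutral sizing is a fine approximation when each bet is a tiny fraction of the firm's capital; it stops being one when you are a significant fraction of all the money in the world.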
That's the trouble, there is always doxxing. We none of us know what is going to come back to haunt us down the line, unless we're so insignificant that it will never be more than "do you really want your mother to read that?" levels of potential embarrassment.
But even 'jokes' about how much smarter and more efficient etc. you are on drugs than those poor dumb normies aren't really funny, and do show a level of immaturity or arrogance that we now see got her into trouble: "well, if Sam says it's okay, then I'll do it, because Sam is so smart".
"I would have thought Caroline (including the version of Caroline who wrote that post) knew that, which is part of why I'm waiting to see what the explanation here is."
Are you serious? From posts like those? Scott... you may have to seriously consider whether you have a severe gullibility problem. I'm not trying to dunk on you here. Between this and your post about believing people who say crazy unverifiable shit, you seem to be bad at a core skill of human interaction.
I think his reference to having been friends with a member of the community who moved abroad is pretty clear. They read and cross-posted each other's stuff.
Also, I suppose none of it will get much of a fair read in "this timeline" to borrow Scott's lens, but Caroline's writing is compelling and smart. It looks like her blog has been taken down, which is probably also smart on her part. But I think she's an intelligent person and I'd be much more interested in her memoir someday than Sam's.
But since she's smart, she probably will also listen to lawyers and lay low from here on out. And hopefully enjoy some non-internet based hobbies and craft.
It's the "why do girls like bad boys?" thing all over again, and whether it's "why do you like this drug-dealing petty criminal loser over the nice respectable guy", then it's "he has low risk aversion and that is so attractive".
Drug-dealing petty criminal or guy who got in way over his head with billions of other people's money: no, it's not big and it's not clever, and grow up, girls.
Given the results of her trading style it doesn't seem like much changed in terms of a trading thesis, except a new active participation in criminal activity.
Maybe this is the way to tie in those "normal human vices" Scott mentioned. If SBF was making massive undercollateralized loans of customer funds to impress a girl, well, that's at the very least *understandable*.
It does make me update towards workplace relationships being a much worse thing than I previously thought. I had always thought no-fraternization policies were just authoritarian garbage meant to stamp out any hint of humanity in the business environment, but now I see their purpose.
Okay, pushing back against this, the only argument anyone has for meth is a tweet of Caroline's saying she feels much better on amphetamines. The obvious explanation for this (given that she tweeted it) is that she's on Adderall for ADHD (or "for ADHD").
Also, although Caroline has experimented with poly at different points in her life, I don't see where people are getting the polycule sex cult thing. There was one article saying that "all the higher-ups were having sex with each other", but that seems consistent with SBF having sex with his girlfriend Caroline, male-executive-A having sex with his girlfriend female-executive-B, and so on. All the actual descriptions of relationships I've seen are between two people.
I have no evidence there *wasn't* meth and sex cults, I just don't see any evidence that there *was*.
It very probably wasn't anything as lurid as rumour is making it out to be, but when there is a big juicy scandal that the general public can get their teeth into, it's even better if sex 'n' drugs (rock'n'roll optional) are involved.
If you're talking about "these people were living the billionaire lifestyle on stolen cash", then the Schadenfreude of righteous indignation is not as fully enjoyable if the reports are "and then they all went to bed at ten p.m. after having their cocoa and got up early and had a healthy breakfast". If you're going to live the billionaire lifestyle, whether or not you are going to be involved in dodgy deals, why not go for full-on decadence rather than bland boring beige conformity? Part of the deal of having super-rich elites, be they emperors or billionaires, is that they provide spectacle and colour for the enjoyment of the commoners!
The Good Rich Man
by G. K. Chesterton
Mr. Mandragon the Millionaire, he wouldn't have wine or wife,
He couldn't endure complexity; he lived the simple life.
He ordered his lunch by megaphone in manly, simple tones,
And used all his motors for canvassing voters, and twenty telephones;
Besides a dandy little machine,
Cunning and neat as ever was seen
With a hundred pulleys and cranks between,
Made of metal and kept quite clean,
To hoist him out of his healthful bed on every day of his life,
And wash him and brush him, and shave him and dress him to live the Simple Life.
Mr. Mandragon was most refined and quietly, neatly dressed,
Say all the American newspapers that know refinement best;
Neat and quiet the hair and hat, and the coat quiet and neat.
A trouser worn upon either leg, while boots adorn the feet;
And not, as any one might expect,
A Tiger Skin, all striped and flecked,
And a Peacock Hat with the tail erect,
A scarlet tunic with sunflowers decked,
That might have had a more marked effect,
And pleased the pride of a weaker man that yearned for wine or wife;
But fame and the flagon, for Mr. Mandragon obscured the Simple Life.
Mr. Mandragon the Millionaire, I am happy to say, is dead;
He enjoyed a quiet funeral in a crematorium shed,
And he lies there fluffy and soft and grey, and certainly quite refined,
When he might have rotted to flowers and fruit with Adam and all mankind,
Or been eaten by wolves athirst for blood,
Or burnt on a big tall pyre of wood,
In a towering flame, as a heathen should,
Or even sat with us here at food,
Merrily taking twopenny ale and cheese with a pocket-knife;
But these were luxuries not for him who went for the Simple Life.
Given that we have Caroline explicitly saying that she is sexually attracted to boys with low risk aversion, it seems reasonable to assume that there is some sort of causal relationship between the sex and the terrible financial decisions that bankrupted the company.
Add in the regular amphetamine use, and you're one methyl group away from calling the operation a "meth-fueled sex cult".
I am not a biologist, but "one methyl group" seems like it would make a pretty big difference to me. But even if some members of that company used legally prescribed Desoxyn instead of Adderall, "meth-fueled sex cult" kinda implies orgies under the influence of methamphetamine or some such.
I think it is reasonable to hold potentially slanderous claims to a higher standard of evidence.
I have little regard for "crypto" (meaning crypto currencies) in general, but the claims imply for me:
a) some group X is a cult
b) the group uses sex in rituals, as rewards or some such
(a cult whose members have a sex life does not make it a sex cult, or about every cult would be one)
c) the cultish activities, sexual or otherwise, are depending on or enhanced by methamphetamine.
My understanding is that the difference in effect between meth and regular d-amphetamines is mostly a combination of dosage and method of administration.
I'm saying d-amphetamines particularly because they generally have more of a dopamine effect than l-amphetamines, while the l-amphetamines produce most of the norepinephrine effects. From a recreational abuse standpoint, you want more dopamine (pleasurable high, burst of energy, etc.) and less norepinephrine (which tends to be deeply unpleasant past a certain level). Benzedrine and Evekeo are 50/50 mixes of the two isomers, Adderall is a 3:1 mix of d- and l-, and Dexedrine is pure d-. Recreational meth is typically mostly d-methamphetamine, while l-meth is sold over the counter as a nasal decongestant inhaler.
That said, d-amphetamine (Dexedrine) and d-methylphenidate (Focalin) are both regularly used as prescription meds. For that matter, meth is legally produced and prescribed in pill form for the same sorts of things (mostly ADHD or narcolepsy) that the others are prescribed for. And by most accounts, the effects are pretty similar.
One key difference in meth-as-street-drug is that it's usually snorted or smoked, which gives you a big rush of drug in your system all at once very quickly after taking it. If you swallow a pill, it needs to go through your stomach first and get absorbed in your intestines, which slows and spreads out the dose in your system. Conversely, d-amphetamine is also sold as Vyvanse, where it's bonded to a lysine amino acid that needs to be metabolized off of it before the drug does anything, spreading and delaying the peak blood concentration still further: this is done specifically to reduce the abuse potential.
The other big difference is dosage. Typical therapeutic dosage of meth is 5-25mg daily, often divided into morning and afternoon doses. A quick googling suggests that a typical recreational dose of meth is more than an order of magnitude higher, around 200 mg in a single dose.
"Attracted to people with low risk aversion" is just a nerdy way of saying something like "bold, daring men".
If we learned that Jill Biden had written in her diary that she was attracted to bold, daring men, and then Biden took some decisive action like the chip sanctions on China, would we describe the White House as a sex cult?
Scott, can we describe the White House as anything *but* a sex cult? After what Bill got up to in the Oval Office, is there anything that has not been reduced to "powerful men like getting their end away, and impressionable women like servicing powerful men"?
>"Attracted to people with low risk aversion" is just a nerdy way of saying something like "bold, daring men".
I would accept this argument were it not for the fact that almost everyone involved is a current or former quant trader with a rationality blog. There really does seem to be a cult of low risk aversion in the mathematical decision theory sense among FTX/Alameda executives, not just the bold, daring action sense.
We have to explain the fact that a bunch of intelligent, well-meaning people ended up St. Petersburg paradoxing themselves into incredibly stupid and evil decisions. There has to be something else going on, and polyamorous status jockeying (https://i.redd.it/454qxctl7pz91.jpg) fits what little we know like a glove.
Yes, a lot of it is speculation (the doctor they had on retainer exclusively to prescribe meds is just an unproven rumour, but apparently a common thing among trading firms), but these people seem to have been abusing drugs daily for years, unless daily consumption of stims and sleeping pills is not considered abuse: https://twitter.com/SBF_FTX/status/1173351344159117312?s=20&t=LczwmgHHEFOQy7Cd5GFdnA
Again, I don't want to defend them but I think there is a difference between "used Adderall as prescribed, albeit by a sketchy doctor" and "on meth", and it's not fair to replace one with the other to make something sound more lurid.
That's fair, and my initial comment was obviously not completely serious; I guess Yudkowsky's shenanigans were on my mind as well when I wrote it. The last week has been a huge disaster for crypto and everyone involved, and it will be a long time until all the truth comes out.
Not going to watch a 43 minute video, but yes I agree with the conclusion. Peterson uses the religious language, but he is not religious in the sense that religious people would recognize. He mostly uses religion as a metaphor.
I share those concerns, but I'm not here because I "support" the Substack. I'm here because it's read by a lot of smart and potentially influential people who pride themselves on their intellectual openness but have disappointingly shallow understandings of ideas that matter deeply to me, such as Kantian ethics and feminism, so there's a decent expected return on articulating basic defenses of these ideas.
It's quite possible that, like many Internet spaces before it, this community is on a self-reinforcing slide towards the alt-right as people like you get driven away and replaced by increasingly open reactionaries. But even if so, there's a long period in this pattern where it's worthwhile for decent people to resist the process, refuse to be bullied away, and impede alt-right radicalization by calling out obvious strawmanning of progressive ideas (without getting baited into pointless debates); imo ACX's professed open-mindedness and no-downvotes interface make this effort worthwhile for at least a while longer.
What do you think about the self-reinforcing slide toward the woke left in every corporate, academic, and cultural sphere, as people like me are driven away and replaced by increasingly open SJWs? What do you think about the bullying, the woke-left radicalization, and the strawmanning of conservative ideas in literally the rest of society? It's great that you're here and contributing to the intellectual diversity of this forum, but what I want to say is that the disdain toward woke ideology doesn't come from nowhere. It comes from exactly the things you're complaining about, just done by the other side, the side that happens to have hegemonic power over society right now.
I similarly believe that both conservative ideas and left-leaning discursive spaces benefit when intelligent conservatives leave their comfortable safe spaces and sally forth to politely defend their ideas, call out strawmen, and generally keep progressives from wandering out of our mottes. In the left-leaning political groups I belong to, I'm always nagging my comrades to give moderate and conservative viewpoints their due.
I do think that "woke left" ideas dominate in academia in part because they're more correct, and therefore more fit to survive in a highly competitive memetic environment where people compete for professional status by mustering empirical and theoretical arguments to overturn others' theses and defend their own, but there's certainly an element of groupthink as well. This leads dumbed-down versions of these ideas to spread among young people as their average exposure to higher education increases, which in turn leads corporations chasing young customers and talent to espouse capitalism-compatible bowdlerizations of these ideas (e.g. DEI instead of Black Marxism). There are lots of opportunities at every step of this process for smart, articulate conservatives to check excesses, and I think this would be much better intellectual citizenship, and better tactics, than retreating to alt-right fora where you can sneer about woke strawmen in peace.
Why would intelligent conservatives do any of that when the rationally expected result is to be dogpiled with political and personal vitriol, possibly extending back into meatspace if one is insufficiently anonymous, and our actual defense of our ideas is ignored in favor of easy, entertaining strawmen?
The number of places that aren't clearly "conservative safe spaces", but where the above is not the rationally expected result of defending conservative ideas, is tiny. This is one of them. I can't think of any others offhand.
Maybe out of a selfless and manly commitment to the common weal, or disdain for the kind of snowflake who lets their fear of being called bad names keep them from exchanging ideas with those who disagree with them? I'm given to understand that conservatives value these things.
But I’d also urge you to consider that, just as liberal snowflakes misinterpret legitimate criticism of their ideas as “violence” and refuse to engage in reasonably constructive spaces like ACX, conservatives might have created a comforting caricature of academic speech norms as an excuse for preemptively refusing to engage with critics they subconsciously fear. Consider this philosophy paper by a natural law theorist arguing that eating meat is ethical:
You can then decide for yourself whether this is a good-faith and rational response to conservative ideas, or a mere dogpile of political and personal vitriol that ignores the arguments made.
"In the left-leaning political groups I belong to, I'm always nagging my comrades to give moderate and conservative viewpoints their due."
Not that long ago, this used to be me. The name calling, strawmanning, complete unwillingness to see nuance, and often censorship were what convinced me, more than anything else, that conservative ideas had merit. The woke left worry that letting people hear conservative ideas would immediately convert them, but I was actually convinced by the censorship to give the underdog a chance, to stick it to the authoritarians who want to tell grown adults what they can or cannot read, hear, or say. I admit that's not very rational, but sympathy for the underdog used to be a left-wing instinct.
"I do think that "woke left" ideas dominate in academia in part because they're more correct, and therefore more fit to survive in a highly competitive memetic environment where people compete for professional status by mustering empirical and theoretical arguments to overturn others' theses and defend their own"
I'm in academia, and I can tell you that this never happens when it comes to SJ issues. 95% of the time, discussions of SJ issues are echo chambers where every idea is leftist to the hilt, and no idea is ever challenged. If you do raise a conservative opinion, even one that's obviously and demonstrably correct (e.g. "this hate crime wasn't actually a hate crime, according to the police and every news outlet, including the most left wing"), you get immediately dogpiled, accused of creating an unsafe work environment, and sometimes reported to HR. Campaigns will start to kick you out of the university, and if another institution offers you a job, to get that institution to retract the offer. People will ask your officemates whether they're OK--as if working in the same room as a conservative is a physical hazard. None of this is hypothetical. I've seen all of it with my own eyes. If you want a highly competitive memetic environment where people compete for status by mustering arguments to overturn others' theses and defend their own, and those theses have anything to do with SJ, you're more likely to find it in the neighborhood bar than anywhere near academia.
Funny you should mention—just the other day I facilitated a discussion in my left-leaning academic group about how we should respond to the officer-involved death of a trans fellow student at our university, in what the family claims but the authorities deny was a bias-motivated killing. Different views were expressed and heard out respectfully (including that the authorities’ account may be true or partially true!), compromises were proposed, the thorny issue of gender-neutral language in Spanish was raised and elegantly addressed, votes were taken, a rather moderate compromise statement was issued, and nobody was dogpiled or cancelled.
I was fairly proud of the process and result, as it does take a certain amount of tact and delicacy to avoid upsetting people when discussing sensitive topics like the violent death of a classmate, and sometimes even some tedious throat-clearing of the “everyone here can agree on the equal humanity of trans people, I just want to be sure we keep in mind that” variety. But I certainly wouldn’t give myself credit for pulling off a twenty-to-one miracle—these discussions are totally routine in our group and on our campus.
If you really can't voice a conservative opinion without upsetting people in the ways you describe, I'd encourage you to consider whether you might be wording things in an unnecessarily provocative way that makes people think you're more interested in iconoclasm than in seriously seeking truth.
"If you really can't voice a conservative opinion without upsetting people in the ways you describe, I'd encourage you to consider whether you might be wording things in an unnecessarily provocative way that makes people think you're more interested in iconoclasm than in seriously seeking truth."
I'd encourage you to consider that your experiences may not be typical. I'm glad that you were able to have a successful discussion where different points of view were represented, but I'm telling you that the academic environments I encounter are ruled by an oppressive orthodoxy. Rather than take my experiences seriously, you immediately jumped to victim blaming: "what did you do to provoke them?" (You didn't even blame the right victim, because in most of the cases I was an observer and not a participant.) Of course, it *is* always possible to cower in fear more and tiptoe more around others' sensibilities. But there's clearly a double standard here, because which part of terms like "microaggressions", "white privilege", "dead white men", "cultural genocide", or "whiteness" is not unnecessarily provocative? Labelling conservative speech as violence--that's not provocative? How about tearing down statues of Lincoln or Washington--that's not provocative? The whole basis of wokism is unnecessary provocation, mixed in with explicit discrimination against disfavored groups to boot.
Lastly, I'd suggest considering human nature. Even the purest, noblest ideology leads to authoritarianism when given hegemonic power. Jesus forgave the prostitute, turned the other cheek, and said the meek will inherit the earth; his ideology was used to justify burning heretics at the stake and killing millions of people in religious wars. Now, I'd be lying if I said woke ideology was pure and noble, because censorship and discrimination based on race and sex have been repulsive to me since I was a kid. But even if it *was* pure and noble, human nature is not, and ever since wokism gained hegemonic power, it has been used for power, greed, and status. Don't be so sure that you won't be the next victim. After all, most of the victims of cancel culture have been left wing, because most conservatives live in conservative bubbles where they're immune to cancellation.
I'm pretty sure that Bitcoin's original purpose was to evade oversight to enable various crimes - including buying and selling illegal merchandise (drugs mainly) and obviously tax evasion.
I think the likely intent of the early bitcoin crowd (before people started using it as an investment) was to facilitate transactions without legal restrictions.
Depending on the laws, legal restrictions may or may not be reasonable. Many sorts of transactions are prohibited in some places: slave trading (certainly bad), buying toxins (probably bad), buying drugs (debatable), gambling (debatable), donating money to designated terrorist organizations (such as Daesh or Wikileaks), prostitution (I guess cash is easier than bitcoin there), buying medicine (probably net positive), buying contraceptives (good) or banned books or movies (probably easier to get using the tools commonly used to circumvent copyright than paying btc).
Like with the onion router network (which provides anonymous communication, or is supposed to anyhow; who can say how many nodes are run by the NSA) or even encryption, the question comes down to how you see the state. If you model a state as a generally benevolent entity which knows best and only prohibits stuff for good reasons, all such tech solutions are tools for evil criminals. If you either view your state as oppressive or are concerned about it becoming so in the future, your view will be quite different.
(While TOR is certainly used by dissidents and such, I concede that I have not heard many heartwarming stories about people buying contraceptives or sex education books using bitcoin.)
For the purpose of transactions, the long term price development seems of little relevance as neither party needs to hold onto the bitcoins very long.
Regarding Argentus' claim that crypto gives non-criminals a way to invest in the crime sector, I see two plausible interpretations. One could claim that investing in an asset used for tax evasion, thereby driving up the price (and in turn profiting from the demand created by tax evaders), is equivalent to investing in the tax evasion crime sector. The same claim could possibly be made for other investments like buildings, art, or gold, though.
Or one could claim that the various cryptocurrencies and tokens are pyramid schemes used to defraud investors. Claiming that the early investors who jump ship in time were investors in crime while the ones who stuck with it until the rug pull were victims seems a bit arbitrary to me, though.
Because while the absurdity heuristic and the related offensiveness heuristic are poor ways to determine what is true, they are really good ways for people to coordinate meanness.
There is a spectrum of positions that would fall under the term "human biodiversity". In the weakest sense, it says races have different average characteristics, which is obviously true (height, skin color, eye color, hair texture, etc). Then you have those who extend this to differences in average intelligence, temperament, or other mental characteristics. Then you have those who claim that those differences are significant in determining group outcomes. Then you have varying degrees of views that claim that makes some races somehow "lesser" than others. This last set of views is what I would call "scientific racism", and is indeed a terrible view to hold, but I am quite certain Scott does not hold it. I believe he falls somewhere in the middle two, though I'm not sure where. Importantly, believing in differences in group averages does not mean you believe those averages should be applied to individuals. That East Asians are shorter on average than Europeans is well-established, but no reasonable person would say that Yao Ming shouldn't be in the NBA because of this.
It sounds to me like you're calling at least the last 3 things HBD to include Scott, then turning around and calling only the last one HBD to impugn Scott as believing it. Or maybe you do have such a black and white view of the world that you think any belief in different average mental characteristics is no different from wanting to round up black people into camps. Either way, you do not seem to be acting in good faith. I may be wrong and you will actually engage with this comment. I would welcome that. But I can't help but suspect you are mainly trying to stir up trouble.
As a scientist, I can say that race clearly relates to intelligence: it is impossible that any two distributions, even of people of the same race, will have an identical mean (average). Accordingly, there is some shift in the average.
Also, as a scientist, I am well aware that there are both smarter and dumber people than me in all of those distributions regardless of race. Accordingly, I don't really care how big the offset in the mean is. That one race is on average less intelligent than another is obviously valid. Spending your time arguing that it is true is petty and pointless.
Leaking emails like this is not a good look either; neither is signal-boosting them. And the "horrible revenge" comment is obviously a joke.
That said, I don't see the emails as evidence that Scott is a horrible racist; rather, that he's willing to admit (even if only in private) a very inconvenient, but very likely true, fact.
If it's not evidence of racism, it *is* evidence that you should temper your trust of Scott when it comes to... well, any NRx-favored positions, inversely to how much you trust him to have a superlative ability to "filter out the garbage" - especially given his revealed priors.
The actual text is that he thinks that some of what neoreactionaries say are "nuggets of gold", i.e. extremely valuable, and says he can filter out the garbage. Whether you, the reader, trust that Scott can do that filtering as well as he thinks without unconsciously adopting their garbage is kind of the whole takeaway.
The text also literally shows several examples of what he believes in or supports as a result of that filtering, giving you evidence to how good his filters are (or are not).
I think that's actually very charitable, if you think you can trust Scott's bullshit detector. The only reason to call it uncharitable is if you already think Scott has failed in the credulity department, at least in this category.
I don't trust Scott - or anyone else, for that matter - to filter any of my beliefs. Neither does any other mature, intelligent adult. The only thing I trust Scott to do is to examine interesting ideas with entertaining rhetoric. I don't have to trust his bullshit detectors because I have my own. This isn't a cult, it's a group of smart people who like good writing. It's a place for the reasoned debate of contentious topics. If you can't hear an idea without adopting it uncritically, then probably this isn't the place for you.
It sounds like you're triggered because Scott apparently finds merit in an idea that offends you. Why should anyone else care about your reaction? This is a community that values coming to common understandings of objective truths. If you disagree with the idea, then debate the idea. That's how a common understanding is generated. Maybe you're right, maybe you're wrong. Maybe both sides have something to learn. But the process isn't helped at all by insulting someone for holding an opinion that you don't. This community doesn't play political gotcha and it doesn't ostracize people for wrongthink.
The fact that Scott is able to find some merit in an ideology that he generally disagrees with only bolsters the notion that he's able to evaluate ideas on their own merit. I think it's to his credit, and your reaction doesn't reflect well on your ability to do the same.
Now, if you'd like to debate the ideology in question in a reasonable way, be my guest. But if you're just interested in insulting people for disagreeing with you, then this isn't the place for you.
It appears as though this comment is responding to something other than the comment it is a reply to. I'm not sure what part of my comment you're saying is an "insult", but your comment certainly seems to be full of barbs directed at me.
But this gets at one of my pet peeves, so sure, I'll take the bait.
>I don't trust Scott - or anyone else, for that matter - to filter any of my beliefs.
I'll be blunt, this is bullshit. This is a claim I hear a lot, and I've never been convinced by it. Everyone's internal models of the world are implicitly based on what other people say, because there's no other way to do it.
In order to accumulate knowledge, you need to (a) gather it from first-principles observation or (b) get it from someone else. And the vast, vast majority of it is going to be (b), because no one has the time or even the physical ability to do otherwise.
Sure, you might not accept everything uncritically from everyone. But for many people, you're going to trust most of what they say is true, because you genuinely don't have the time to fact check everything that everyone says. It would be inefficient, and frankly *irrational* to do otherwise.
So unless you can honestly claim you've never acted on or shared-as-true a single piece of information you haven't personally fact checked from multiple sources (and if you did claim that, I'd call you a liar) you're going to end up incorporating things that other people say at face value. Especially so for people you trust to have intelligent, researched positions.
And in general, Scott *should* be one of those people that you incorporate at a high trust-value. He does a lot of work to make things legible, and is generally good at doing it with minimal bias.
>The only thing I trust Scott to do is to examine interesting ideas with entertaining rhetoric.
Cool, but that's not all Scott does. His articles are frequently in the form of persuasive essays trying to convince the reader he is correct. Hell, one of the most recent posts was literally a voting guide for the election, based on Scott's research and opinions. (One where he says he leans "liberal to libertarian", and says that the closer you are to that the more valuable you'll find it; the degree to which he is actively hiding NRx influence is *very* relevant here)
Regardless of whether you think so, Scott frequently presents his opinion as trustworthy because he's done sufficient research from sources he believes are credible. (And once again, much of the time this is true! Certainly compared to most places.)
>I don't have to trust his bullshit detectors because I have my own.
Cool, part of the process of calibrating your bullshit detectors is determining how much you should trust others. I'm merely stating auxiliary calibration info.
>If you can't hear an idea without adopting it uncritically, then probably this isn't the place for you.
Yeah, it's not me I'm worried about. There's no "critical information literacy" test required to read Scott's blog, and he's an intelligent person trusted and recommended by many other intelligent people.
>It sounds like you're triggered
Clichéd misuse of "triggered", minus one Quirrell point.
>This is a community that values coming to common understandings of objective truths.
A substantial part of the past two weeks has literally been debating subjective experiences, keep up.
> If you disagree with the idea, then debate the idea.
Scott has explicitly asked people to avoid debating HBD in his comment section before, so I'm not going to start on that. Suffice it to say, I and many others consider NRx beliefs on the matter to *not* be objective truths, certainly not to the extent that we'd say they're "right" about it.
>But if you're just interested in insulting people for disagreeing with you, then this isn't the place for you.
The amount of words your comment spends directing insults at me because I disagree with you, just to end with that, deserves an award in irony.
I think the underlying idea here is that people "unconsciously adopt garbage". It sounds like basic epistemic humility, but it's ill-founded: too vague, very different from "vulnerable to this list of biases and fallacies," and a fully generalizable thought-terminator that obliterates the ability to engage with anything.
Moreover, a writer who openly worries about that seems to be signaling that they don't trust their own judgement.
I don't see how you can conclude that since HBD is an empirical claim while racism is a philosophical one. While the two positions can overlap rhetorically, one certainly isn't a logical consequence of the other. It isn't at all inconsistent to say both "average IQ differs between racial groups" and "all racial groups have equal moral and legal value." To claim otherwise is, in fact, a racist position: it makes the moral equality of racial groups contingent on empirical facts that may turn out to be false. If you can only accept someone as your moral equal if they're as smart as you ... well, I wouldn't call that a morally defensible position.
Jealousy applies to more than just sexual jealousy, and it's fairly clear Brennan was jealous of Scott, hence the nasty little email leak. After all, nobody was writing NYT articles about *him*, good bad or indifferent!
As much as I hate the plastic straw bans in general (and personally think they're the result of fallacious reasoning[1]), I think the problem is not that plastic straws are superior, but rather that most eating establishments pick the absolute cheapest and therefore shittiest paper straws. There are higher quality paper straws that don't suck (or rather do suck in the literal sense, unlike the cheap ones after five minutes), and other non-paper options, and we should be shaming any business that cheaps out so aggressively. It's also a supply-side problem - plastic straws had decades to get the unit cost down to pennies, we need straw manufacturers to come up with something decent and affordable that doesn't contribute to worsening the environmental plastic crisis.
[1] My personal conspiracy theory is that most of these bans are based on studies of "plastic recovered from beaches" (which I have seen cited on several websites campaigning for them), in which case you have a kind of survivorship bias: straws are a relatively large piece of plastic, much easier to spot than smaller bits of plastic waste, but smaller than the most obvious waste (like bottles) that would be collected before more detailed cleanup. Therefore straws are over-represented in those studies.
I wanna say the answer is plastic straw sit-ins; get a bunch of people together to fill a restaurant, and they all bring plastic straws. Or go the other way; get a bunch of people together and go full Airplane!, just splash all the liquids near your face, then melodramatically lament "if ONLY there were a better way!" Everyone in unison; you'll probably have to practice. (Probably bring a mop too, that'd be a pain for staff to clean.)
Maybe you could get everyone to bring small PVC pipes and use those as protest straws. Pinch one end of the straw shut and argue that means it's not a straw anymore, it's a cone. Get your dirtiest coat and pass out plastic straws in back alleys.
It used to be done by pointing out that paper costs significantly more energy to make than plastic, which is true, and that paper mills usually have more environmental impact than plastics factories, which is also true.
The plastics industry tried this many years ago, when people were suspicious of the change from sturdy paper grocery bags to cheap plastic grocery bags that readily tore and dumped your carefully packed cans of frozen OJ, cauliflower, and perfectly ripe peaches all over the freaking grocery store parking lot, raising one's blood pressure 20 pts in an instant. It was easy to persuade the stores, because plastic bags were cheaper and far easier to store and transport, but shoppers were pretty resistant. The ol' "Paper or plastic?" refrain, which those of an age will recognize as the punchline of many jokes, was an attempt to do a little prodding to see if the next victim...er...hapless customer could be persuaded to take one for Team Gaia, if not Team Albertson's/Kroger/Safeway Bottom Line.
Never worked, though. Nobody could ever bring themselves to believe that plastic was better for the environment than paper, inasmuch as paper is made of trees and what could be more natural[1]?
So the paper just inexorably disappeared, and to its credit the plastic got a smidge better, and we all just got used to accumulating all these fucking flimsy plastic bags of no use whatsoever. (The traditional fate of paper grocery bags was to cover next year's textbooks, be cut up for arts and crafts, be used to store quantities of paperbacks being donated to the library When I Get Around To It So Stop Bugging Me OK? and lots of other things I have now forgotten.)
-------------------
[1] Soylent Green bags, of course, but we haven't quite got that far yet.
I prefer *bagging* into paper bags, because the rigidity helps prevent products from flopping all over the place*...plastic bags work great for spherical cows, less good for anything with angles. They also tend to have less cubic capacity just in general. But for second-lifing, plastic makes a superiour trashcan liner. Can't remember the last time I ever actually bought separate garbage bags. (Paper bags are OK for compost, as long as nothing's too wet anyway...) But of course I know most of those plastic bags are gonna wrongly end up in a recycling bin, because bad heuristics, whereas the paper ones could end up in any bin and it's not an issue. Tradeoffs abound, as in all things.
WRT government mandates: the regressive "bag fee" tax in SF is 25 cents. (Waived for people paying with foodstamps, because #equity.) I guess the goal is to...punish people for using environmentally-optimal disposable paper or plastic bags, and coerce them into buying aspirationally-reusable ones? Which need anywhere between a dozen and a few hundred uses to be equivalently green? And that's why we sell them in a wide variety of fun collectable colours and patterns and seasonal designs? I'm too lazy to actually complain to my local legislator about this Obvious Nonsense, but finally got to the frustrated point of at least never bag taxing anyone who comes through my line. The best defense against unjust laws: putting enforcement in the hands of people with no incentive to do so.
* (The grocery chain I work for does paper by default; we didn't even offer plastic at all until Christmas a few years back, when there was a recycled-paper shortage, so then whoops shitty long-term contract with some janky plastic bag manufacturer that makes godawful weird-shaped ones. Hence why our plastic bags permanently have Christmas designs.)
Handles are definitely a big selling point, and yep you're right the plastic has improved considerably -- although now the gummint mandates the store charge me $0.10 each for them, so I carry string bags like a clochard.
Ah, but you see the government *has* to force these decisions on you, for you are a bumbling child who would slay Gaia in the stupendous depth of your ignorant cupidity, except at election time, when so long as you vote for more of the same instruction from your betters, you are the very font of reason and righteousness.
Always found it curious how some businesses went the Cass/Sunstein "nudge" route of mild stick punishment for "buying" a bag, whereas others went the route of mild carrot reward of a "discount" for BYOB (of any type). Taxes and fees seem much more common, because gubmint loves its political-winner free revenues. But I wonder if it's actually more effective at the stated goal of discouraging disposable bag usage.
I'm pretty sure this is just nutpicking, plain and simple. There are something like a hundred million Republican voters in the United States, and thousands of Republican politicians; finding a few who are foolish enough to advance such a proposition is meaningless. There may be threats to our democracy going forward, but this isn't one of them. It isn't necessary that we drive the number of people expressing such foolishness to literally zero. And it isn't persuasive to argue that since [other party] has a few loose nuts, [other party] must be driven into the outer darkness.
Significant to whom? Like, are these relevant party actors?
I have heard about "Qualified Voting" from both sides of the aisle in several countries, but unless there is a real effort backed by credible people, I'm not going to make broad moral condemnations.
Counterpoint: we are increasing the age at which people are allowed to do a lot of other things. You have to be 21 to buy alcohol, cigarettes, or a handgun; to be an airline pilot (technically, to get an airline transport pilot certificate); to do interstate trucking; etc.
You also need to be at least 19 to get a Coast Guard captain's license.
I'm sure that I've missed a good number of other important items.
And with the standardization of post-secondary education as a requirement to enter the labor force, people at 18 are also unlikely to be self-sufficient.
Yup. If your brain is too underdeveloped to safely buy a gun or alcohol, why should that brain be able to choose someone to do violence on their behalf?
You certainly don't have fewer rights as a young person -- indeed, the state if anything prosecutes crimes against youth with greater vigor and attention than crimes against adults. But your responsibilities are circumscribed, and your rights exercised on your behalf by someone older, usually your parents.
The thesis is that you don't have sufficient judgment to exercise responsibilities...well, responsibly. So we say you cannot sign binding contracts until you're 18, and the responsibility to get educated, to take care of your health, and not be a social pain in the ass are exercised on your behalf by your parents. Almost all of this is designed to protect you against the folly of your own stupid decisions. Voting is no different: we want to protect you against the consequences of voting for the Pied Piper or Wicked Stepmother with the poisoned apple.
It's also true we want to protect *ourselves* against the consequences of your stupid decisions, where it could affect us (e.g. voting), but this is the secondary concern -- the primary reason we limit child responsibility is to protect the child.
Our objection to varying the age that marks the transition comes from purely practical considerations. As you point out, the actual age of maturity can vary widely between individuals. I know 15 year olds I would trust to drive, fly a plane, or vote, and 25 year olds I would not trust to walk the dog. But we can't be doing tests on everyone, lacking the cheap Maturity-O-Meter the county clerk can just point at any individual to take a reading. So we set arbitrary age deadlines that work on average, and that's good enough. Indeed, we still argue over whether the deadline even on average ought to be 18 or 19 or 21 or 25 or something else -- e.g. there used to be, at least, states where you could get a driving license at 14, because kids on the farm needed to be able to drive pickups around.
The conservative position is friendlier to restricting youth franchise simply because youths tend to vote for experiment, and conservatism by definition does not think experiment is as valuable as progressivism does. Nor does conservatism think that participation in government is as important to the individual as progressivism does, because conservatism doesn't value collective activities as highly, and doesn't think government should be as significant a presence in life anyway.
The most vocal part of the GOP right now is people who built their political careers on acting like victims, and the media (on both sides) loves to cover these people. It's hard for me to understand how much this actually reflects the feelings of Republican voters as a whole. Likely very few of them care about or have even heard of this kind of thing, given that most voters (of all parties) are much less educated on specific candidates than we would expect or want.
Oh come. Let us not be naive, or transparently tribal. When was the last time any politician, of any stripe, sat down after a loss and said "gee, I guess I need to become more like my opponents" instead of "God damn it, we wuz robbed by [insert random conspiracy/bad luck story here]! We just need to double down on what we were doing before, it will surely work next time..."
Since the ol' pendulum reliably swings, this theory has the virtue that if you wait long enough, it will surely work out successfully, and for a politician (or political party) it's a lot easier to just wait a cycle or two (collecting cash from your outraged supporters all the while[1]) than it is to say well gee everybody, I guess everything we stood for was rubbish and we need to do more of what those people we called low-down good for nothing rascals last time were suggesting. Denial is a thing, and it often works well, and is easier than reconsidering your entire philosophy o' life. I mean, there's a pretty impressive level of denial going on right here in this discussion about the nature of the crypto biz. But it's easier to rationalize how it was a one-off, a bad apple, bad luck, et cetera, than going back to square one and saying holey moley maybe I was wrong all the way back there.
When the Democrats lose, they're not any different about reconsidering the current norms to their advantage, while arguing fiercely all the way they're just leveling some playing field or other, or redressing some inequity, totally has nothing to do with any basic primate urge to be on top, e.g.:
Reconsidering the size of the Supreme Court because you're pissed about who got seated lately isn't materially different[2] from reconsidering the Twenty-Sixth Amendment because you're pissed about how many 18-21 year-olds voted for your opponents. And, fortunately, they're both equally DOA to anyone with a lick of genuine sense, and so serve mostly as a way for True Believers to indulge in revenge pr0n fantasy.
----------------
[1] A common error is to think political parties exist to acquire power. They certainly love doing that most of all, but they exist to make money and provide jobs for their adherents, and if they can do that in the minority, that's not such a bad thing at all. Indeed, it has certain advantages, since free of the responsibility of actual governing you can be a lot more strident in your righteousness.
[2] I am of course aware that anyone who scored highly on his verbal SATs can provide a closely argued rationalization for why they are like TOTALLY different, and only an imbecile such as myself could fail to see that.
I disagree about the court. Control of the court goes to whoever has the votes and the determination to use them. Once you have both, do what you want. Call it the McConnell rule.
Yeah it is. Extending or contracting the franchise is only one of very many ways to influence the path that power takes from The People to Our Guys In Office. If you single it out as an especially sacred totem to defile, then the a priori most likely reason is because it happens to be your tribe's totem. Other tribes get upset at threats to different totems.
I mean, just for example, the Red Tribe sees red at the notion that people who are not bona-fide citizens 100% willing to prove it at the polling place with a Real ID should be allowed to vote. We can probably all agree that *in principle* non-citizens shouldn't vote, right? But notice only the Red Tribe gets really upset that that particular totem might get defiled by, for example, lax vote-by-mail or same-day registration.
Edit: I realize I didn't really try to answer your question as best I could, and given that you were willing to engage I should. So: my best guess as to the underlying origin of this is a sense of enormous frustration coupled with denial. The frustration stems from seeing The People, bless their hearts, be just unable to comprehend the obviousness of the true threats to the Republic, despite being told them over and over again, and experiencing much pain as a direct result of them. Jesus (the thinking goes), the 70s are back with a vengeance, stagflation is destroying lives and the children are turning out stupid and needing therapy -- how much more motivation do you idiots need to Right the ship? (For a good example of the same sense of baffled frustration among the Blue Tribe, consider gun control. Jesus, how many of your own kids have to be shot to death, accidentally or on purpose, before you morons clue in?)
The denial part is being unable or unwilling to look seriously for what reasons might impel a reasonable man, who differs from you in only modest ways, to disagree so remarkably on which are the true threats, and which are the better ways to ameliorate them.
I think both come from the 50/50 state of affairs. If one were always in the 75% majority, there would be no frustration. If one were always in a 25% minority, there could be no denial. It's being...right...on...the borderline of being able to feel yourself in general agreement with the nation that drives both miseries.
So people reach for drastic remedies (because of the great frustration), but remedies that are simple-mindedly tribal (because of the denial). There is neither a respect for traditional norms (because of the frustration), nor a willingness to look for more nuanced solutions that might require mutual adjustment all around (because of the denial).
If your major point is to express disgust at the level to which some serious number of people are willing to indulge their narcissistic fantasies, above a hard-eyed evaluation of the real world, I agree with you 100%. It is a sad betrayal of the magnificent edifice of self-determination and individual liberty our ancestors left us (at great cost to themselves). We have lost a great deal of the ingredient *we* have to supply -- which is self-discipline.
I'm just observing somewhat snappishly that the Red Tribe holds no monopoly on major chunks of it acting like self-indulgent twits who take the social contract dangerously for granted.
I take those calls to raise the voting age to 25 as seriously as I do the calls to lower the voting age to 16 or lower. (This lot want to bring it down to 12 and they're *philosophers*, we should all be impressed right?)
Which is: a bunch of interested parties think that by doing so they can turn out more voters for their side. The Democrats are just as bad on this as the Republicans, and it's not uniquely American; I've seen similar calls in my own country (how much this is the usual suspects just copying the Yanks and how much it is 'young people are progressive, they'll vote Us in to do progressive stuff' I can't be bothered parsing).
"But the fact that this is a thing at all in a significant minority of the Republican party is quite baffling"
Oh gosh, those naughty Republicans! Being supportive of the idea of changing the voting age!
Now, you can quibble about "yes, but these are not the *party*", but they're just as involved in political exhortation as politicians so that's hair-splitting to my mind (if you're boasting about getting "We helped bring RCV to the 2020 Democratic primaries, where five states used it to select presidential candidates" and talking about your "ever-expanding network of state, local, and national allies" then you're not just plain John and Marjorie Citizen).
There's a fundamental difference between giving people rights and taking them away. Expanding rights may be foolish, but removing them is cruel. This isn't a "both sides" issue
Maybe a better way to look at it is whether a rearrangement of rights is positive-sum, negative-sum, or zero-sum. From a certain point of view the franchise is zero-sum, but I think taking into account more realistic considerations from psychology, political science, etc., expanding the franchise is *usually* positive-sum and restricting it is *usually* negative-sum.
Who says? If you ask me, expanding the franchise is always negative sum. I don't hold with the proposition that all men are equal (although they may well be born that way), and that there is identical value in every man's vote. Once you've got the people who everyone totally agrees should be voting, adding marginal cases is pretty much bound to dilute the average quality of the vote, and you'll descend to bread 'n' circuses, followed by tyranny, the way democracies traditionally do.
Given my druthers, I'd go back to the days when only 1 in 50 people had the right to vote, maybe fewer, and make people compete for it. All-day exams of extraordinary difficulty, being able to speak 3 languages fluently and play the oboe, maybe get through a deadly obstacle course without losing more than one limb, gladiatorial and/or chess combat to decide ties.
If you're going to put ultimate power in the hands of voters, which a republic does, they ought to be the best hands you've got. It's logically absurd to expect the average schmo to be able to consistently elect above-average political leaders. If you want above-average government, you're going to need above-average voters.
There might be some argument to be made for a truly elite vote-of-the-technocracy, but in the US of A we started out with basically all the white male landowners, so it's not like any of the expansions of the franchise were starting out from a position anything like that. I continue to maintain that for *most* distributions of the franchise, including all the ones we've tried so far, expanding it is positive sum by virtue of increasing fairness and inclusion in a system that already can't count on voters' expertise. I mean, the old white male landowners managed to end up with Andrew Jackson, and following that, a slide into civil war. Nothing that bad has happened here since the franchise started seriously expanding!
I'm in favor of lowering the voting age to 16 or even (?!) 12 but I've been in favor of that since I was 16 myself, without interruption, and well before I was voting D. I can remember being disenfranchised at those ages. It mattered. It wasn't a good thing. I really think we should go even younger than 12, and it's not nearly as clear that it will help the Ds at that point.
Well, the 2016 presidential election resulted in Congressmen calling for electors to ignore their state's elections and vote Hillary in anyway (which backfired wonderfully when electors instead refused to vote for Hillary), and lots of internet commentary about eliminating the Electoral College and/or the Senate. It's just the nature of politics these days.
Pretty sure it's down to the Internet putting the world in contact with all the worst people, and politicians still mostly being used to the pre-internet days when opinions had to reach critical mass before they got broadcast and archived. Hopefully people will eventually realize how the internet works and future politicians will learn better than to chase the approval of the loudly insane.
Fortunately, calling for voter restrictions is dead on arrival; you'd need a majority to pull anything like that, and people calling for it are only doing so because they don't have the majority. The gates will only ever open wider.
(It would be hilarious if they raised the voting age to 30; you only have to be 25 to run for Congress.)
I agree. I think it's pretty weak, intellectually, to see this incident as an indictment of EA. I am not an EA enthusiast and this doesn't change my thinking on the subject. Many were so quick to "blame" EA. Seems like you could just as easily use this to say "people with autism shouldn't run companies". Of course no one would or should say this, but it's just as valid as the EA criticism!
I wouldn't be surprised if someone said that. It's not even necessarily wrong, if we're willing to broaden the overton window quite a bit, but that's not a conversation I or many other people want to have.
Obviously this doesn't discredit the entire concept of effective altruism, but it does point towards confirmation of a few red flags I've been noticing in the broader EA culture the last few years, including the too quick trust in anyone who displays the right subcultural affiliations, and the overabundant enthusiasm for crypto and financial stuff generally.
The modern EA/rationalist/postrationalist status feels overall too much like a social scene for my taste, tied up with individual personalities and the associations between them, when I feel like the original concept was more about abstract impersonal rules for thinking and acting better, which could have been applied in any society past present or future.
Ironically though, that concept appeals to a specific type of person, so in retrospect I think it was inevitable that a relatively homogenous network of people was going to spring up, and to a large extent that's necessary anyway if you're going to coordinate anything effective. And with that comes the inevitability of at least a few bad actors, no different really from Enron or Theranos or a hundred other business ventures gone wrong, so you may be right that this doesn't actually say much about EA in the big picture.
I think people in EA circles are seeing this a lot more than is really out there. In the mainstream financial and crypto press I am following, everything is about crypto and its issues, and there is zero mention of EA.
Yes, this is definitely true. I probably wouldn't have seen the EA angle except I follow this blog and Sam Altman on Twitter and a few other people who would comment on it.
I don't know the people involved well enough to make any judgements on their moral characters. That they seem to have been Ye Olde Bay Area Rationalist Sub-Culture Types inclines me to the notion that they really *did* believe in EA and all the other do-goodery.
They also had all the faults and flaws of Ye Olde Bay Area Rationalist Sub-Culture types, and I think that is what contributed to the mess. Bankman-Fried got a lot of incense burned before him in online articles and elsewhere, and such flattery is very sweet. Are we surprised he started to believe his own hype and that he couldn't do wrong? That he had the Midas Touch? (That the touch of Midas is a *cautionary* tale seems to pass many people by).
The story of Chu-bu and Sheemish is instructive here, I think:
The price of the Ponzied asset is already a prediction market on its future value, which incorporates the probability that the asset will collapse in some financial scandal.
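As a toy illustration of that point (the numbers here are entirely invented, not from the comment), "the price is already a prediction market" just means the quote sits below the honest value by the probability-weighted loss from a collapse:

```python
# Toy sketch with made-up numbers: if the market assigns some probability
# to a collapse that would send the asset to roughly zero, the quoted
# price already discounts the "honest" value accordingly.
p_collapse = 0.25        # market-implied probability of a blowup (assumed)
value_if_honest = 100.0  # price if the project is what it claims to be
value_if_fraud = 0.0     # recovery value after a collapse

quoted_price = (p_collapse * value_if_fraud
                + (1 - p_collapse) * value_if_honest)
print(quoted_price)  # -> 75.0
```

On this framing, merely believing a scandal is *possible* earns you nothing: the discount is already in the price, and you only profit if your probability estimate beats the market's.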
I like the original quote especially for the way it resonates with the modern understanding of neuroscience, habit formation, dopamine, etc. It reminds me that there are real physical consequences in my physical brain of repeated actions.
This is so important. Transforming a reflective "am I doing a good thing today" into "I'm doing a thing to then do a good thing today" introduces a weighting problem and fuzzes things up really fast.
Layer in a bunch of wordsmithed justifications about maximizing giving and I feel like we need a math formula because things have gotten really indirect.
I'm really concerned that this just straight up doesn't work at scale.
How many people in the Western world predicted at the outset of the war that Russia would lose its fleet though? If there were armchair admirals who knew better they were pretty quiet, and most laypeople took it as a matter of faith that Orientals would be soundly beaten by the Tsar’s Christian military. So doesn’t that support Scott’s argument?
Generally you can't just get into running a navy, so even if you know something Tsar Nicholas doesn't, you can't just demonstrate this; but it is actually reasonably possible to start up a new hedge fund and have your results eventually speak for themselves. This is a special quality of finance, which is why there's a special efficient markets hypothesis, rather than a general efficient everything hypothesis.
The fact that I let $100 in cash sit in my bank account the last month or so, ROI = 0%, instead of investing it with FTX, ROI << 0, is already prima facie proof that I'm a smarter investor.
When other people are racking up $millions in losses, you don't have to build a +$10 billion business to prove you're smarter, you just need to not fuck up as badly. Pretty low bar to clear.
Yes, it is a fully generalizable argument to all criticism of authority. I've also heard people claim that EA never claimed to have special insight into how to run charitable organizations. Lots of defensiveness.
If you can outpredict the market on which companies will increase/decrease in value, you can accumulate outsized returns with no additional resources needed.
If you are a better general, what equivalent action can you do to get an advantage? I don't think there is one.
This seems like a genuinely important distinction. If someone claims to have really known what was going to happen to FTX beforehand, I would be most persuaded by them showing me how much money they made by shorting it.
There is no reason to believe someone who knew FTX was probably fraudulent would have made money off its collapse. This is because the ideal point for shorting the stock is not when FTX became a fraud but when the fraud is uncovered. I freely admit I do not have a great skill at predicting when the regulators or general public will become aware of frauds. This is an entirely distinct skill from determining what is or isn't fraudulent.
I agree that timing considerations could have made it harder to make money off of this.
It's still true that it is an opportunity to make money, and so if someone claims to have had this information but did not short FTX we should update towards them not being as confident in the past as they claim. This is especially true since if you were really certain about fraud, you could cite the evidence that made you worried publicly, and thereby kick off investigation.
There isn't an equivalent opportunity for criticizing military expertise, so conflating them seems inaccurate.
I just explained why shorting behavior is a poor proxy. Shorting requires you not just to know it's a fraud but to predict the specific time of collapse. If you short and the stock collapses but on the wrong day, you can still lose money. Knowing something is a fraud does not imply knowing when it will be discovered.
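A tiny sketch of that timing problem, using an invented price path (not real FTT data): being right about the fraud is not enough if the price pumps before the discovery.

```python
# Hypothetical monthly prices for a fraudulent token (invented numbers):
# the fraud is real the whole time, but the price pumps before the
# regulators/public catch on in month 4.
prices = [100, 140, 190, 260, 10]

def short_pnl(entry_month, exit_month, notional=100.0):
    """P&L from shorting `notional` dollars' worth at entry, covering at exit."""
    entry, exit_ = prices[entry_month], prices[exit_month]
    units = notional / entry          # how many units the notional buys
    return units * (entry - exit_)   # profit if price falls, loss if it rises

# A short seller who is *right* about the fraud but squeezed out by
# margin calls during the pump loses money:
print(short_pnl(0, 3))  # short at 100, forced to cover at 260 -> -160.0
# Only the trader who also times the discovery profits:
print(short_pnl(3, 4))  # short at 260, cover at 10 -> roughly 96.15
```

So a non-existent short position is weak evidence either way: it's equally consistent with not knowing about the fraud and with knowing about it but (sensibly) not trusting one's timing.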
I agree with the idea that it's fine to take shady people's money to do good things; I get really really *really* annoyed at the "Trusted Entity is secretly funded by Evil Entity!" brand of outrage-journalism.
The key differentiating factor in this case is that it turns out it wasn't really *his* money. And that was...not entirely unforeseeable.
I normally have some trouble explaining the heuristic I use to decide that someone is untrustworthy, but in this case it's super easy: every crypto project is a scam, everyone in the crypto space is trying to con "investors" out of their money, everything they say or do is at least partly in service of that goal, and anyone who controls other people's crypto assets is embezzling them. If you frequent crypto spaces - not just now after this scandal, but at any time since Mt. Gox went down in 2014 - people will literally tell you over and over that you should use these heuristics. They're astonishingly open about it.
(I have some theories about why crypto culture is like that, but I think they're a bit much for an open thread comment.)
Perhaps a good intuition pump: these are by and large people who loudly proclaim that banks and existing financial systems are fraudulent. When they set up their own banks and financial systems, they are likely to feel that all their competition is fraudulent too. That's a really sticky trap that makes it easy to slide into fraud yourself.
Obviously not every cryptocurrency person is motivated like this or behaves like this.
I actually think the causality is mostly in the opposite direction for the major figures in crypto culture (typical mind fallacy: they think everyone else is defrauding everyone all the time because they want to defraud everyone all the time) but I'm sure there's a feedback loop, plus some disinhibiting effects of normalizing bad behaviour and blaming victims.
And you're right: not everyone in crypto is like that. (I'm not like that, although you shouldn't believe me.) I'm just pretty sure that for any decision with real financial/legal/physical consequences, it's best to assume that anyone who wants you to exchange money for crypto or to turn over control of your coins is trying to steal all your money, destroy your reputation, and implicate you in multiple felonies.
But the problem is that someone can reword all those cautions to be about "what I'm doing is not unethical, it's high risk but it's justified by these paradigms because the projected return is so great and the utility will be so maximised!"
Nobody ever thinks their schemes are going to go bust, they expect to make big money. Like betting systems that won't fail, or 'if you lose at the racetrack, keep betting so you will get back the money you lost'. If Bankman-Fried had been merely a common fraudster, he would have had some plan in place so he could have scooted before more people realised the dominoes were tumbling down.
Even among high risk options, there surely is a separation between ethical and unethical ones. Buying a big stake in ethereum hoping it goes up certainly seems ethical, though not particularly wise.
This is just deontology, isn't it? How do you define unethical in a consequentialist system without reference to the consequences? And if the consequences are referenced and are the arbiter of whether an action is moral, then how do you not end up with the ends justifying the means? These aren't impossible questions. But "every individual step must be moral" is just deontology, where the morality of the actions matters more than the consequences.
In rule-consequentialism, you don't refer to the consequences of a specific action, but you do refer to the aggregate consequences of consistently following a proposed rule.
(Depending on the version, you may refer to the aggregate consequences of *everyone* following the same proposed rule, at which point the line between rule-consequentialism and deontological ethics starts to get very blurry and lose most practical relevance.)
Morality in rule-consequentialism means consistently following your established rules even when you believe that breaking them would produce a better outcome in a particular instance. This is still a consequentialist attitude, just at a higher level of abstraction and a longer time scale.
Rules consequentialism does not involve "acting ethically in all cases" though. That's deontology. For example, rules consequentialism could have an unethical rule that has positive social effects. If every action must itself be ethical that's deontology, isn't it?
If consistently following a rule has net-positive consequences, then the rule is ethical by consequentialist standards.
If you hold all your rules to deontological standards, then yeah, you're kind of doing deontology(ish), but that's a bit tautological.
And there's a lot more to deontology than "every action must itself be ethical." That description applies just as well to act-consequentialism, most versions of common-sense ethics, and divine command ethics. The distinguishing features of deontology are (1) the specific algorithm for determining what action is ethical (https://en.m.wikipedia.org/wiki/Categorical_imperative) and (2) the focus on the reason behind the action.
(In deontological ethics, an action is only ethical if you do it *because* it's ethical, out of a sense of duty. Doing the right thing for the wrong reason is immoral, and concern for the consequences is the wrong reason. So it's impossible for a consequentialist to *actually* practice deontological ethics even if they always pick the same action a deontologist would. And even picking the same actions is only plausible if they adopt one of the universalizing forms of rule-consequentialism, which are unusual.)
> And there's a lot more to deontology than "every action must itself be ethical." That description applies just as well to act-consequentialism, most versions of common-sense ethics, and divine command ethics
No, it really doesn't. Your definition of act-consequentialism doesn't say every action must itself be ethical but that the results of every action must be ethical if you imagine it applied universally. Deontology says that there are moral rules that directly inform actions regardless of consequences. Divine command ethics say that morality is known from divine will which is similar to deontology but distinct in that divinity can change the rules or be vague in ways deontologists disagree with.
> The distinguishing features of deontology are (1) the specific algorithm for determining what action is ethical (https://en.m.wikipedia.org/wiki/Categorical_imperative) and (2) the focus on the reason behind the action.
The categorical imperative is basically your definition of act-consequentialism though. The rule is literally "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." The reason it crosses over into deontology is because of its insistence each action must itself be moral rather than just the results. It's also a specific attempt to fuse consequentialism into a more deontological framework so it's a bad example of core deontology.
>In deontological ethics, an action is only ethical if you do it *because* it's ethical, out of a sense of duty. Doing the right thing for the wrong reason is immoral, and concern for the consequences is the wrong reason.
Yeah, no. You have a weird definition of deontology that isn't widely held or in the link you posted. Deontology doesn't care about intent. Deontology is about the application of moral rules that universally apply at all points. For example, if you believe that stealing is wrong then deontologically stealing from the rich to give to the poor is wrong even if you agree the consequences are good. Act utilitarianism, meanwhile, might say what if everyone steals from the rich and gives to the poor. And then you might end up with an act utilitarian saying: Well, what if we regularly stole from the rich in fixed amounts and gave it to the poor. That would increase net utils while being predictable enough so as to not destroy the wealth we're stealing from. To which the deontologist would just reply: stealing is wrong. It's wrong regardless of how much you mitigate its effects.
This specific distinction is highly relevant for this FTX stuff.
> So it's impossible for a consequentialist to *actually* practice deontological ethics even if they always pick the same action a deontologist would.
This implies there's no meaningful difference between consequentialism and deontology except in the minds of people. Which isn't true or what most people think.
The person you describe as knowing sounds awfully like Caroline Ellison, in which case, let me say that I refuse to believe that she could have acted in bad faith until I am given overwhelming evidence to the contrary. Quite the opposite: the impression I got is of a true believer, and a good person. This does not preclude the possibility that, under circumstances of a certain naiveté and inexperience in a field as murky as crypto, she might have let herself go along with what she might have perceived as temporary and 'bad' expedient means. But to believe this person ever intended to purposely and maliciously scam people out of their money or be privy to a fraud is, for me, completely out of the question. I believe the best option is to be charitable and wait to see what the courts of law have to say once the dust has settled.
Here's the rub. Bankman Fried and his woman look like goblins. Human beings instinctively recoil from goblins. Rationalist utilitarians say 'no, there's no rational reason to recoil from people who look like goblins'. But there is.
Now, it's theoretically possible to construct a version of utilitarianism that would be sufficiently inclusive of both heuristic rules and the dark, dark secrets of HBD and psychology. But the problem is that it would be very complicated and the whole point of utilitarianism is to simplify morality. So, in practice, rationalist utilitarianism always ends up saying some version of 'no, there's no rational reason to recoil from people who look like goblins'. But, to reiterate, there is.
Reading this week’s Douthat column I kind of regret rage canceling my paid subscription to Scott’s Substack. “You won’t get a refund,” said Substack. “No more hidden OTs for you.” “I don’t care!” I replied.
I am curious how Scott will address the appeal for a bit of perhaps less than maximally effective altruism tho.
Curious, why did you rage cancel?
It was an earlier essay where Scott made the point that replacing millions of Japanese with people from Sub Saharan Africa would change the country’s culture.
He was right of course. If you replace a large portion of descendants of a culture that dates back to antiquity with people unfamiliar with that culture, things are going to change. But why Africans in particular? A substitution of Englishmen or Germans or anyone else would have a big impact on a mature culture.
It seemed like a proxy for White Replacement Theory in the US, something I don’t at all worry about. I could have been reading that wrong of course, but that was how it struck me at the time.
I talked myself down from that ledge eventually but had already canceled my paid subscription. It was a futile gesture, because Substack had already collected for a year’s subscription. Just one more in a lifetime of futile gestures. I suppose I run a bit hot on some long held beliefs.
I need advice for a friend who is in ungodly amounts of pain. I am thinking about the SSC article on pseudoaddiction- miner who takes opioids for years for horrible mining injuries speaks brusquely to hospital staff, gets his opioids taken away, shoots himself in the chest, miraculously survives, etc. My friend has been taking opioids for 6 years after a horrific car accident, and their doctor is threatening to take them away. What should they do?
Your friend can point out to their physician that the CDC has changed their opioid guidelines on November 4th, 2022 (https://www.cdc.gov/mmwr/volumes/71/rr/rr7103a1.htm). Specifically, they have moved away from titrating or weaning long term opioids especially if they are well tolerated. Sudden cessation of opioids is increasingly identified as patient abandonment and can theoretically be reported to the state board.
I wish I had a better answer for you. We are in a period of reaction regarding opioids, as I’m sure you know. My wife had outpatient knee replacement surgery a year ago and she had to endure a lecture about how half of West Virginia is on heroin now and they didn’t *start* with heroin. The spiel went along the lines of “Studies have shown that acetaminophen and ibuprofen are just as effective,” at which point I pointed out that she can’t take ibuprofen because of an ulcer, but that didn’t slow her down. “… aroma therapy and meditation can be used to control pain too….” It went on and on as they tried to hustle my wife out of bed and send her home. It got to the point where I had to interrupt the nurse and ask “Shouldn’t you be giving this lecture to the Sackler family?” She was wearing a Covid mask but I could see her bite her lower lip.
If worse comes to worst for your friend, he might try kratom if it is legal where he is. Some forms are supposed to relieve pain in the same manner as opioids. It’s a non-optimal solution of course. He’d have no way of knowing for sure what he was ingesting, since there is no government regulation of kratom, and the US has been trying to make its sale illegal, but it sounds like your friend is in a tough spot and might want to consider it.
This raises the question of "why," which I doubt you could supply in any case. Answers include agreeing to stop going in early for repeats (maybe signing a pain management contract stipulating this), asking for a slow taper, perhaps sufficiently slow to allow time for referral to another prescriber, showing up at appointments having made some obvious efforts at self-care, etc., things that suggest the opiates are actually objectively improving his quality of life (it doesn't sound like they are here).
While those are certainly possibilities, I know for a fact people get arbitrarily cut off from years of necessary medication: my mom was one of them. Not only did they never ask for an additional refill, it would be preposterous to assume that someone with extensive damage to their back (three herniated disks) wasn't warranted in asking to receive opioids. They were subjected to forced tapers; recommended to consider surgery (they did not want to); and at their most vulnerable moments were reduced to lifelessly slouching on a plush leather chair.
It was definitely a formative experience for me. Before then, I worked from the assumption that failure to act (both politically and personally) was the most consequential thing; the impact of the actions themselves wasn't a first priority to me. That's not to say none of it mattered at all; it just felt abstract to me; it didn't resonate. Part of that probably has to do with my political convictions (on a side note, I don't know if I can mention that in this thread; I'm not sure whether it's odd- or even-numbered threads where politics can be discussed; if I can't, I'll edit it out in post).
Now though I believe both aspects are equally important (and to anyone wondering, my mom eventually did get a suitable replacement; it wasn't opioids unfortunately, but they're now able to comfortably function).
https://timothyburke.substack.com/p/academia-discovery-runs-aground-in
I may post this again, but I wanted to get it said. This might be an added explanation for research slowing down, but I don't know whether it's as bad in the sciences as it is in the humanities.
The short version is that it's not just bad at Amazon and Google; search has become relatively useless for academic sources.
Here's how I got past a cataloguing issue. I'd heard for years that things had been getting better for Jews in Germany over the years, especially in the Weimar Republic. I realized there was a story there, but what was it?
Searching on Google didn't help. I was just getting anti-semitic stuff.
A couple of years later, I think, Google changed its policies. Now I was getting stuff *about* Nazis. They're more interesting than gradual legal change.
Finally I asked people. I got pointed at the emancipation of the Jews and a book called The Pity of It All.
The moral of this story is that you may have to ask people because computer search isn't working. It's like being in the Middle Ages or something, where local and specific knowledge is the essential thing.
https://manifold.markets/VivaLaPanda/will-i-have-found-a-website-which-i
This has been a pretty well-known problem in technical circles for years. Do a quick search for "google search" on Hacker News[1], and you get post after post[2] about how search is getting worse. And that doesn't even include the comment threads that pop up on unrelated posts.
It's one of those problems that gets worse the more you know about a topic, because google has trended towards searching for what it thinks you mean, rather than what you actually asked for (which is great for drunk people trying to figure out "that guy from that movie from the thing", less so for finding documentation or if you actually know what you're looking for).
It's also been overtaken by SEO blogspam - low quality, often GPT generated articles that use a 2000 word preamble to answer a question that takes 3 words, or content that is literally scraped and copy-pasted from websites like StackOverflow.
And, of course, there's the fact that you can't find any organic information about *any* product, and you have to append "reddit" to get anything besides adtech vomit in your searches.
[1] https://hn.algolia.com/?q=google+search
[2] Some huge discussions in the past year:
Google Search Is Dying https://news.ycombinator.com/item?id=30347719
Google no longer producing high quality search results in significant categories https://news.ycombinator.com/item?id=29772136
Ask HN: Has Google search become quantitatively worse? https://news.ycombinator.com/item?id=29392702
If people aren't getting great search results anyway, they may consider at least getting private results. The main option for that is DuckDuckGo, but it has newer competitors such as Brave Search and Neeva (Ecosia is also much more private than Google, although not as private as DDG or Brave Search).
DDG generally works pretty well for me. When it doesn't find what I'm looking for, I'll try Startpage, which usually works. Questions have been raised about Startpage and privacy, which is why I primarily use DDG, but Startpage could hardly be worse than Google (or Bing).
Notably, DDG (and Brave Search) have "bangs" that allow you to initiate a search in another website, so you can do a Google search through DDG, by appending !g to your search query, for example. That would no longer be private, but it can be more convenient than first navigating to google.com.
Kagi search is worth trying; their free version is pretty limited, but it's pretty impressive. It's also using an in-house index; it's not just a frontend for Bing like DDG.
I've also noticed a significant decline in search quality, particularly with Google/Youtube, that's happened fairly continuously over the last 10 or so years. I think that the issue mainly comes from trying too hard to make the algorithm 'smart', and focusing on the wrong things.
For example, on YouTube: when I first started making YouTube videos in the late 2000s, my videos would get tens of thousands of views. I don't have aspirations of stardom, but I do reasonably think that my audience worldwide, that is, people who would be interested in my content, is that big. In the mid-2010s, I'd get a few hundred views if lucky. Nowadays I'm lucky if I break into the double digits. What happened? Why are the people who want to find my stuff not finding it?
Basically, Youtube decided that it didn't want to neutrally give search results that closely matched what you typed in. It wanted to show you popular things that kind of matched what you typed in. It's not trying to show you the best result; it's trying to find an *excuse* to show you something that *it* wants to show you. This is really bad for several reasons, but the main reason is that if you are looking for a small signal, it will get drowned out every time by the closest large signal. This was not a problem before, because you could tell it to neutrally and unbiasedly just give you things that exactly matched what you put in.
The other problem is that the internet has changed. It used to be a place where weird or forward-thinking people were doing things that interested them. Now it is where most of the world's business is done, and also the de facto public square. And what was once great about youtube for example was that someone just messing around with a camera or talking in an unscripted way about something they liked could get seen and interacted with, without a lot of time wasted on making fancy powerpoint-style presentations. Now it's all people who are vying to make money on youtube, creating overproduced content with fancy studio lights and tight scripting.
Anybody know why DSL is inaccessible?
While on the topic of DSL, is it possible for the admins (cc obermot) to reinstate the ability to ignore threads? This was a useful feature that disappeared without an explanation.
Ok lads, back in the van. Uncle Obormot has glued it all back together again.
We were talking about it in the discord. Cassander pointed out that Naval Gazing is down too, so the problem isn't unique to us.
ACX about to experience what it's like when the weird offshoot branch of the family nobody really talks to anymore suddenly shows up uninvited to the family reunion :)
I thought the same thing
Yeah, I can't access the site either. Maybe wait a couple of hours and see if it clears up.
"Anybody know why DSL is inaccessible?"
I only know DSL as an acronym for Digital Subscriber Line, which is an old technology for getting onto the Internet. If you mean it this way then this will be specific to your ISP (Internet Service Provider). If you mean something else it will help if you expand the acronym.
Sorry - especially because it always bugs me when people use acronyms I'm unfamiliar with.
I mean Datasecretslox, the bulletin board Scott always mentions at the beginning of his open threads. It's been down for quite a few hours and that's never happened before.
No worries :-)
FYI, the site is up now (for me at least).
The topic at the top of the topic list is: "What happened to DSL?"
So, what's up with media being so nice to SBF? (https://twitter.com/loopifyyy/status/1592944362274816000?s=46&t=QYFASLmu7f_nv9WfJkguFA). What's the underlying cause? Or is the premise false and these articles are cherry-picked?
I mean, the worst offenders, like the NYT, are cherry-picked, but the fact that the with-kid-gloves NYT article came out right after it got leaked that the NYT[1] has had an explicit "no positive coverage for tech" policy is worrying enough on its own.
I saw someone on twitter comment that the flurry of puff pieces and interviews was like indirectly observing the dark matter of some PR agency, and that rings true.
[1] And by extension, probably most media companies, since their ownership charts are incestuous
Let me get this straight. You trust ‘loopify’ on Twitter more than the Financial Times, NYT, Reuters and Bloomberg?
No, but I do trust the aggregate of the many rat- and postrat- adjacent accounts on twitter that were saying the exact same thing. Also, this is the comment section to be defending the NYT in lol. Of your list, the only one I remotely trust not to have a pro-finance bias is Reuters.
Do people think SBF started out intending to run a scam? If not, approximately when did he start running a deliberate scam?
Excuse me if this has been brought up already.
I would be inclined to update to mistrusting people who talk a lot about their own virtue.
I'm pretty sure SBF et al were running their business in an exceedingly casual and reckless manner from the start. And I'm also pretty sure that their attitude towards e.g. securities and banking law was "bunch of unimaginative suits who will just get in the way of our legitimate business; the less they know the better". Which is often technically illegal. But I don't think they crossed the line into unambiguous cheating-our-customers-out-of-what-we-promised-them scamming until the last few months or so, probably about the time Alameda had its liquidity crisis and needed FTX's client funds to bail them out.
And even then, I think they were optimistically hoping that they'd do a bit of quick what-most-of-us-call-scamming, then double their money through super genius expert trading and refill the customer accounts before anybody noticed, then get back to their basically honest but casual and reckless business. Until the next time.
It also wouldn't surprise me if there was a *last* time, when they used their customer accounts to bail themselves out and then *did* make the money back before anybody noticed, but that may be hard to figure out. I think the records of that are in the heap under the beanbag chair in the orgy room.
I'm not sure the man can reliably distinguish between a scam and an ordinary trading business, so it might be the case that the distinction about which you're asking is not one he could make, which means whatever his motivations were they need not have changed markedly on crossing a line visible only to others (between 'financial trading biz' and 'scam').
After all, pretty much all trading business is a series of bets, and in each bet there is usually a winner and a loser. I bought a bunch of XOM last year at 35 because I thought that price was definitely going to go up, and the fact that it's 113 today means (at least so far) I was right and I'm a winner, but that also means all the people from whom I bought the stock last year were wrong and are losers. Some of their money has been transferred to me, and not because I earned it by doing work for them, but just because they took the losing side of a bet with me.
We don't consider this a fraud, scam, or theft, because of certain bright lines we draw in our heads about what is a "fair" bet between consenting participants and what is not, e.g. we require the bet to be made with the full knowledge of the owners of the capital at stake, we require nobody to have any unusual not generally available information, we require nobody to be in a position of authority that could influence the outcome of the bet, everyone has to be an adult, et cetera.
But many of these lines are in practice somewhat arbitrary, and why we draw this line versus that, and why here versus there is often a bit fuzzy, with relatively dubious precise justification[1]. Nevertheless, most of us believe there *are* bright lines that separate "fair" from "unfair" or "criminal", even if the actual legal lines aren't drawn precisely in the right place.
There are other people, however, who see the arbitrariness, however modest, and extrapolate from its existence to the general conclusion that *any* bright line is just arbitrary bullshit, and there really isn't any important moral difference between ordinary trading and what some unenlightened dummies would provocatively call "a scam." I think people who end up running scams in what looks like an accidental way, just kind of wandering heedlessly across some bright line or other, are probably often in this category. Because they don't much believe in the existence of bright lines, they don't take ordinary care to stay on one side of them.
People who run Ponzi or Nigerian prince scams from the get-go by contrast probably totally grok bright lines, and presumably decide to just cross into the Neutral Zone deliberately because it will be personally profitable.
----------------
[1] And to be fair, I think sometimes the arbitrariness of the lines can sometimes ensnare genuinely innocent people. I've always been a little dubious that Martha Stewart was guilty of insider trading, although I could be wrong about that of course.
Your logic makes no sense. By your logic, if you ever sell your stock and it continues to rise, then you are a loser. Actually, if there exists any other investment that would have made more money than your purchase of stock, you are a loser.
Clearly this is wrong. The winner or loser status of the person whose stock you purchased must be based on the performance (relative to expectations) of the stock between their purchase and sale and not after they sold.
I could probably have phrased it better by saying that people voluntarily surrendered their right to a future stream of income to me for a price that turned out to significantly undervalue that income. How you want to phrase that in accounting terms would probably be a matter of taste, about which I have no strong opinion because I find the question a bit OCD and uninteresting, but I'm pretty sure to most people it would feel like "a loss," and taking the other side certainly feels like a gain to me.
If you mistakenly think your house is infested with termites and must be torn down, and I come along and, taking advantage of your bad judgment, buy your house for 25 cents on the dollar, far below its market value after I have it inspected and prove there are no termites, do you still think value hasn't been transferred from you to me? Same idea.
And some people would say the latter example is a "fair" deal, because you could've had it inspected yourself, and others would say it's "unfair" if, for example, you are actually mentally ill and you think it's infested by Martian termites who can turn themselves invisible to evade inspection -- and therefore I took cruel advantage of a disability. That relates to the larger and more interesting issue, which is that there are bright lines that separate "sharp business deal" from "fraud," but (1) we're in general a little fuzzy and arbitrary about where we draw them, and (2) some people of a rigid or antisocial mindset argue the imprecision proves the lines per se are bullshit, and so there *isn't* any important difference between "sharp business deal" and "fraud." Our Hero SBF may be one of them.
The news that keeps coming out on this looks worse and worse. These were absolutely not innocent naive dufuses who stumbled into incredibly serious crimes.
His interview with Vox (and god, what a hilariously bad idea that was) gives me the impression of stupidity rather than malice. Like, just so careless with the exchange that he didn't know or care where the money was going until a bank run forced him to actually sit down and count up where all the money had gone.
He does say some "meh, ethical investing is just a sham" stuff which might make me lean towards an intentional scam, but taking the interview as a whole I'm inclined to read it as an after-the-fact defense - "everyone's a scammer, so what I did wasn't that bad" - rather than an admission that the whole thing was planned from the start.
(Not that I'm trying to defend him. Even his own description of his errors is well into "Sufficiently advanced stupidity is indistinguishable from malice" territory. But I don't think he ever had the clarity to say to himself "this is definitely violating the law but it's going to make me enough money that I don't care.")
Edit: Link to the interview. It's worth a read: https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy
To be overly technical, SBF didn't run a scam. He ran a legitimate business (effectively a currency exchange) and stole from it. I think this answers your question: he started out running a legitimate currency exchange but had so little moral fiber that once a lot of people trusted him with their money he just stole it for his own purposes.
This doesn't seem to be the case based on new statements from the company after the SBF orgy club was removed. This was barely a company at all. They didn't keep records, in fact SBF encouraged them to use auto-delete communication methods, they were fake audited by a fake audit company or something, they don't know who their creditors or debtors are, where their money is, or what their assets and liabilities are. Their subsidiaries were not set up right, either.
Preface: FTX and SBF seem to have almost certainly engaged in fraud. The record keeping seems either atrocious or a deliberate attempt to obscure the real record. That said,
It's standard practice to encourage employees to use less permanent media (or ideally, face to face communication) for any communication that might have any legal bearing. There is training to this effect at all the companies I've worked at. Perhaps this is shady, but it is also pretty normal.
If so I might be mistaken. But I was under the impression it did have a functioning platform which is why people still have money in it and are trying to get it out.
That everything was done in a wrong/criminal/etc way: Yes, for sure.
My guess is that this is a boringly simple case of "person has large amounts of money to invest, person makes bad investment decisions, person loses money, person then makes set of riskier and riskier gambles to try to get back above water, compounding the losses." In this case, with larger numbers than usual because there has been a ton of capital sloshing around trying to find good investment opportunities in the last few years.
I'm guessing it didn't start as a scam, and mostly wasn't one. For whatever reason his hedge fund ran into real trouble, and he thought up this one clever hack: using funds from his crypto exchange to save it. And since everything he was doing was for the greater good anyway, he decided not to let old-fashioned rules stop him.
I know the issue of 'use cases' with crypto has been beaten to death, so I apologize in advance for the redundancy, but I would like to ask the smart people on this chat this: Is crypto the first example of an innovation/commodity for which the garden variety champion cannot explain the 'use case' to the typical rube?
Self disclosure: In this context (and others, without question, but those aren't relevant here) I am the rube. And I have read all manner of interviews with the likes of SBF and the desperately malnourished kid who started Ethereum, and whenever the question of 'so what is it really good for' comes up we get the inevitable 'that's a really good question!' (to all who have been to an academic conference, feel free to laugh with me!) and then a bunch of 'blah blah blah decentralized blah blah blah' and we move on to the next question.
So- I'm not here to argue that crypto 'doesn't' have a use case, because it absolutely might and I can see some distinct paths where it does. But in terms of explaining it to the average guy, I think it's fallen laughably short. Which to my eye at least is an interesting feature of this commodity, since I can't think of another example of an asset that has this unique property.
Am I wrong? Have there been others? If not, is this an augur for what's to come (i.e., more assets that end up worth more than the GDP of Brazil but that nobody can clearly explain how they will improve our lives in the short-to-medium term)? Or if so, what were they and what happened to them?
I think the "typical rube" can understand money laundering easily enough. The hard part is for champions to make money laundering into something the typical rube will support.
The typical rube *also* understands speculative bubbles in assets of dubious fundamental value, but it's probably a lost cause to try and make that sound good. The money laundering, you can at least point to people trying to flee oppressive regimes or to buy medical marijuana (adderall, whatever) in Red States.
>The hard part is for champions to make money laundering into something the typical rube will support.
Meh, they got their work cut out for them by your local government.
> Is crypto the first example of an innovation/commodity for which the garden variety champion cannot explain the 'use case' to the typical rube?
No. It isn't. It's actually common.
Electricity, to go back almost two hundred years, was also quite mysterious to the average person. And simultaneous with it being used for things that laid the groundwork for modern electric grids you had grifters claiming it was magic. One woman in Paris told people it had magical powers. People would pay to sit near an electric engine. She would have assistants rub water on skin and then shock people for supposed health benefits. Etc
Now, you might be saying that lighting a building up is an obvious benefit. And it is. But it took over sixty years to get from the first economical electric engine to the first electrically lit home.
This is entirely typical. There were people saying Google was too complex to explain compared to things that more closely resembled index card systems that most people had been trained on. Etc. The idea that innovation is obvious or easily understood comes from media where the genius goes "eureka!" and explains how to solve their problem in simple terms the audience can understand. But it's not how it actually works.
So during the sixty years between the first economical electric engine and the first electrically lit home, was there a use case for electricity? If so, could it be explained to a rube? I'm not familiar with the history of electricity, but I'm having a hard time thinking of a use case that couldn't be easily explained (e.g. "you know watermills? This lets you grind stuff too, but without water. You can put a 'watermill' anywhere now!")
That's because you're looking backward. What is obvious to you was not obvious to them.
The first electric mill took even longer. The first electric lighthouse took a little less time but not by much. The first commonly comprehensible invention that such electrical generation enabled was telegraphs. And even that took a specialist to understand. And they invented new kinds of fraud such that even now we have statutes on the books such as wire fraud. This was all happening alongside a large number of grifters who claimed electricity could do anything from help you connect with God to predict the future to raise the dead.
You can say, "Ah, but the use of telegraphs is obvious!" Well, it's obvious to you now. It wasn't obvious at the time. The British government repeatedly wholesale rejected the use of the telegraph and it was viewed with significant suspicion at the time. The Royal Navy famously compiled a report where they rejected the innovation as not really adding value. The average rube did not realize the value and it took concerted effort by electrical advocates to drive adoption. (And a significant number of rubes were outright scammed.)
Now, just because electricity was doubted doesn't mean anything that's doubted is valid. That's the same error in reverse. But the fact there's doubt or that non-specialists don't understand it doesn't really mean anything. It's common even for things we think are "obvious" today.
I think I didn't explain myself well. I didn't mean to say that the future uses of electricity were obvious. I meant that, at the time when electric mills, telegraphs, electric lights, etc were in the future, electricity really didn't have many uses. A rube would have been right to be suspicious of anyone selling electricity, because as you said, lots of them were con artists claiming electricity had magical powers and going around shocking people. When electric mills, telegraphs, electric lights, etc *did* get invented, and electricity was no longer just a scam or a curiosity, the use cases *were* easily explainable to rubes.
Let's apply this to crypto. Right now, there are no obvious use cases that are easily explainable to a rube that aren't of debatable value. (The ability for anyone, including criminals, to transfer money without regulation is easily understandable--but also of highly debatable value.) Therefore, rubes are right to conclude that crypto is either a curiosity or a scam. In the future, if and when great use cases for crypto *do* get invented, it'll no longer be just a curiosity or a scam, and the great use cases *will* be easily explainable to rubes.
Now I think I didn't explain myself well either. The telegraph, after it was proven functional and reliable, took decades to drive adoption. The Royal Navy, the most advanced in the world at the time, did a full investigatory report and determined that it was not useful despite acknowledging its literal capabilities. They basically said (as someone said to me yesterday about crypto) faster speeds alone didn't justify the switch. So no, they *weren't* easily explainable or widely accepted. And plenty of scams coexisted with the valid uses. So your application is wrong.
Crypto might be a scam. But the fact the average person is caught by scams or cannot understand it doesn't point one direction or another. And you're attempting to construct an argument on sand that fits what I guess are your preconceptions. Here's a simple argument: the ACH takes 3 days and crypto takes minutes. (If I'm being a bit simple it's because I've had this argument two other times this week. And both times it ended with them admitting I was right on the technical merits but a few days of extra speed weren't worth enough to justify it.)
The standard answer that I'm familiar with is that fiat is entirely controlled by national governments and they can and have totally fucked over currencies before, whether with asset seizing or with hyperinflation. Less of an issue in the West, very much an issue in much of the rest of the world.
In theory, Crypto gives you a currency that can't be a victim of inflation (though in practice it's treated by most as a speculative commodity instead).
In theory, crypto gives you an anonymous currency, so you can buy illegal things - which might well be medicine rather than recreational stuff, or so that you can protest against the government without them freezing all your bank accounts to starve you. In practice, most cryptocurrencies out there are not actually anonymous, because it turns out that governments really like having a surveillance state and the USG is strong enough to enforce regs on crypto. (P.s. note that the good features of crypto here are basically the same as those possessed by physical cash, but also notice how much governments have been pushing society towards digital transactions and away from cash in hand, for exactly these reasons.)
Assuming it had a strong reason to do so, it seems like the US government has enough resources to just 51% attack any cryptocurrency that becomes a problem for it. So it's not clear to me that they really get you out from under government interference.
They don't need to do that. Nationalize the telcos[1], and reconfigure all the gateways to block, spy on, or modify Internet traffic however you want, until you've achieved whatever you like with the cryptocurrency. You can destroy it, hijack it[2], inflate it, deflate it, whatever you want.
If you want to evade government surveillance or control, the last thing you want to do is build in mission-critical reliance on a vast physical public infrastructure over which government already and inherently exerts great control.
What you probably need is some kind of crypto that can work via spread-spectrum radio on the 20 meter band. Then you're all set, unless the government starts prohibiting the sale and possession of shortwave radios I guess.
-----------------------
[1] You can probably omit step 1, actually, and just make a phone call to the CEOs of each business. They're not going to decline to do the guys who regulate their profit margins any little favor.
[2] Some people like to think this is impossible because they can anonymize their traffic, but I think that underestimates how far ahead of the game it is likely the NSA already is: https://www.technologyreview.com/2020/02/08/349016/a-dark-web-tycoon-pleads-guilty-but-how-was-he-caught/
No, crypto's just stuck between two different poles.
The use case for crypto is obvious; you don't trust the government or the banks and you'd prefer to set up a trustless way to do electronic money transfers. In this case, think Monero. When hackers demand a $100 million ransomware payment from Acer, they ask for Monero. Even if you're not a criminal, and most users aren't, you can be extremely confident that Monero can't be traced, controlled, or inflated by the central banks you don't trust. This has the advantage of a clear use case and the disadvantage of basically being associated with criminals.
On the other end, you have Ethereum, which is both too complicated for me and basically just a weird tech company. This has the disadvantage that nobody has any clear idea why people would use it and the advantage of being drowned in VC money. Unsurprisingly, this is how most people think of crypto now, because people like money.
To quote Jon Stokes, who wrote very well about the Tornado Cash controversy(1): "At some time, you really do gotta pick between 'selling out to the Man' and 'revolution'." Everyone gets the original use case and the original purpose, but people can't get rich off that.
(1) https://www.jonstokes.com/p/ethereums-very-own-death-pit
I'm looking at a technical writing job where I need to create a Single Source of Truth from documentation where information has been copied and modified with multiple versions. I have an idea of how to do this (create a template, fill it with reliable information, and then offload all the conflicts into a 'conflicts in this topic' section below the template. Create an issue of these conflicts in Jira and then allocate time towards resolving them.)
What I'd like is some kind of authority to either show me a better way or else help me justify the course I'm considering. All the writing is about 'why you should create an SSOT' and not how to manage the process itself.
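One mechanical piece of the process above can be automated: finding which copied-and-modified passages have drifted from the version you pick as canonical, so they can go into the 'conflicts in this topic' section and be ticketed. A toy sketch using only the standard library (the document names and texts are invented for illustration):

```python
from difflib import SequenceMatcher, unified_diff

def find_conflicts(canonical, variants, floor=0.6):
    """Flag variant texts that look like drifted copies of the canonical
    text: similar enough to share an origin, but no longer identical.
    Returns {variant_name: unified diff against the canonical text}."""
    conflicts = {}
    for name, text in variants.items():
        ratio = SequenceMatcher(None, canonical, text).ratio()
        if floor <= ratio < 1.0:  # a near-copy that has diverged
            diff = "\n".join(unified_diff(
                canonical.splitlines(), text.splitlines(),
                fromfile="canonical", tofile=name, lineterm=""))
            conflicts[name] = diff
    return conflicts

# Invented example: one drifted copy, one unrelated passage.
canonical = "Widgets must be frobbed before shipping.\nUse the blue frobber."
variants = {
    "install_guide": "Widgets must be frobbed before shipping.\nUse the red frobber.",
    "faq": "Sprocket colors: see appendix B.",
}
conflicts = find_conflicts(canonical, variants)
print(sorted(conflicts))  # only the drifted copy is flagged
```

The diffs themselves are a reasonable starting body for a Jira ticket per topic; the `floor` threshold is a guess and would need tuning on real documentation.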
Cryptofascism in action on Scott Alexander's substack: https://astralcodexten.substack.com/p/open-thread-250/comment/10493983
For all of our daytrippers from DSL, the report comment button is under the three dots.
I think a lot of the old stagers over on DSL are perfectly familiar with the Passion Flower 😁
This is pretty obviously in bad faith and worse in a way, it’s unoriginal.
Gore Vidal went down this road on live television over 50 years ago. Though Vidal used the term 'crypto Nazi' in reference to Bill Buckley instead of 'crypto fascist', the schtick is made of the same crap.
Buckley’s response here was just as terrible, calling Vidal a ‘little queer’ and threatening to ‘sock him.’
https://video.pbsnc.org/video/independent-lens-crypto-nazi-and-other-insults
This is a really bizarre deflection from the use of a mob to disrupt the peaceful transition of power by an authoritarian type strongman backed by the theocratically inclined religious people. That's fascism; the cryptofascism is in the people here all fumbling around trying to pretend otherwise.
Did you link the wrong post? Nothing in it is about authoritarian strongmen, mobs, or theocracy.
I doubt your good faith in this discussion.
Whereas I am _certain_ that your contribution isn't in good faith, regardless of what you believe about it.
You are a mind reader I see. What am I thinking now? Hint - it’s that you are a troll.
The cryptofascist party would like to remind everybody that the goal of this place is to have interesting and enlightening conversations. The opportunity cost for being here is sitting on your porch, watching the leaves change and unironically enjoying everything pumpkin spice. If a discussion isn't informative or enlightening, life's too short to argue on the internet when the trees outside are so beautiful.
In a post ironic world, all expression is genuine.
*shrug*
I genuinely don't think this is going anywhere. I'm not sure I'd call it a flame war but it certainly doesn't seem productive or enlightening. Instead it looks like a ton of bad internet arguments I've seen before.
I also, genuinely, enjoy watching the leaves change and pumpkin spice creamers in my coffee. These things are just obviously good; I could no more ironically enjoy them than I could ironically enjoy a cookie. It's a cookie.
This guy is using the same rhetorical technique as a particular recently banned individual who was coming from the other side of the political spectrum. Who knows? Maybe it’s the same guy and he just likes insulting people and causing a row.
"What if the lineup of people expressing the same mainstream and therefore extremely popular criticisms, NPC arguments we don't have to take seriously, is just the same guy" lmao
Do you really think I was talking about the cookie? That's funny.
There is no ironically claiming to be a cryptofascist.
Linking to another comment on this thread which is your own accusation of cryptofascism is obviously unnecessary. It is devoid of kindness.
Also devoid of truth, BTW, earning a “perfect” score of 0 out of 3.
Yeah, I'm on the same ideological side as them, and even I recognize that Impassionata is going to deserve their ban when Scott gets around to it.
"I'm not a cryptofascist, I just inject myself in conversations about race to smugly imply you don't have any good arguments against racism"
Haven’t heard a single one out of you yet. Not that I need to hear any: to me, racism is self-evidently stupid and bad. But I have heard quite a lot of ad-hominem invective out of you, which is not in line with community norms.
Going way out of your way to tell someone you don't think they have any good arguments against racism is in line with community norms? That's what you think?
It's only in your delusional worldview that I'm required to enter into the conversation against racism. I decline. I take the direction you steer the conversation as evidence that you want to have that conversation, which tells me that you want the conversation to happen. More people talking about race is how cryptofascism works.
Now you're advocating for me to be banned because I attacked you with my arguments, which you've ignored in favor of glowering at how I didn't make arguments against racism that I never committed to making. This is bad faith participation in a comically direct fashion, but we'll see if moderation is autistic--sorry, 'quokka'--enough to be fooled by your snivelling.
If people are idiots easily persuaded into performing the nazi disco dance party rhetorical moves, I want to believe that they are idiots. Call that invective if you like! I shall laugh at your petty word games and insist once again: if you proclaim yourself to be against racism and racists, let's hear your strongest arguments against the inclusion of racists in a community.
Otherwise you're full of shit.
Hah! K thx, bye flamey troll.
That's your opinion of course.
I remember reading a study where the authors wrote two identical papers about political violence, but they simply replaced "left wing violence" with "right wing violence" in the second. They then tried to get them published and tracked the results. Does anyone know about it ? I can't find it anymore.
If you're still looking for this, this was in the "Unsafe Science" substack by Lee Jussim.
https://www.psychologytoday.com/intl/blog/rabble-rouser/201309/liberal-bias-in-social-psychology-personal-experience-i
Should you put your university grades on LinkedIn?
On the one hand, if you don't put them on I suspect viewers will think you are hiding something and may think that you are not competent. This is probably a particular concern for black students, given that viewers may make incorrect inferences about their grades based on statistical data.
On the other hand, if they are on your profile it might seem like showing off (if they're really good), and indicative of a kind of insecurity, as if I'm the kind of person who needs to show off their grades. Also, it might make others with worse grades feel bad about not having similar achievements.
How do people normally deal with this?
If you're a recent graduate, putting your GPA on your resume is a generally accepted practice that won't make people think you are trying to brag. Sometimes, when you're looking for entry-level positions, grades can make a difference. I'd assume that holds for a LinkedIn profile if it's being used to look for entry-level jobs in a field you just got a degree in. Particularly if you went to a second- or third-tier school.
After a few years, it gets stale and people will want to know what you've actually been working on, so take down the GPA and start talking about your projects.
Don't post grades for each class you took; the people who care will ask for a transcript anyway.
I rarely need to see grades to know whether someone groks a field in which I'm competent to hire. Usually seeing their overall trajectory is enough for a first approximation, and then when I talk to the person I can figure it out very quickly.
What seems likely to be more useful is to add a short note about courses you've taken that one wouldn't ordinarily expect you to take as part of the degree, e.g. if you have a degree in computer programming but you've taken quantum mechanics and 3 years of Mandarin, that's maybe worth noting because it might make you stand out for some particular job.
I have never worked anywhere that required or even reviewed grades, outside of education. If you are trying to work in a field that is education-based, the expectation that you share your grades may be higher, but that can be achieved through official transcripts.
Nobody cares about your grades after you land your first job, even if they care at first.
I suggest not putting in anything except where you studied, whether or not you graduated, and your GPA if the system has provisions for it. And omit the GPA if you are already working in the industry and wouldn't be interested in an entry-level position. If a prospective employer really wants to know your grades, they can ask for a transcript as part of the application process.
Aren't those prediction markets easily manipulated?
1: Publicly place a large bet on your own trustworthiness through 2023
2: Through a sock puppet place a smaller bet on your committing fraud in 2024
3: Allow the first bet to shift odds against the second
4: On Jan 1 2024 after collecting your modest reward, commit fraud. Reap rewards of fraud. Then reap rewards from betting on fraud while presumed honest.
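The four steps above can be sketched numerically. This toy model assumes a simple parimutuel pool (real prediction markets use order books or automated market makers, and all the numbers here are invented), but it shows the mechanism: the large public bet on "honest" makes the sock puppet's "fraud" bet pay out at long odds.

```python
# Toy parimutuel market: implied probability of an outcome is its share
# of the total money bet on it.

def implied_prob(pool, outcome):
    return pool[outcome] / sum(pool.values())

def payout(pool, outcome, stake):
    """Parimutuel payout if `outcome` wins: the stake's proportional
    share of the whole pool (fees ignored). Does not mutate `pool`."""
    new_pool = dict(pool)
    new_pool[outcome] += stake
    total = sum(new_pool.values())
    return stake * total / new_pool[outcome]

pool = {"honest": 1000.0, "fraud": 1000.0}        # even odds to start
pool["honest"] += 50_000.0                        # step 1: big public bet
fraud_odds = implied_prob(pool, "fraud")          # fraud now looks unlikely
sock_puppet_win = payout(pool, "fraud", 2_000.0)  # steps 2-4: sock puppet
print(fraud_odds, sock_puppet_win)                # cashes in after the fraud
```

A $2,000 sock-puppet stake against a pool skewed to 51,000:1,000 returns $36,000 in this toy setup, which is the payoff asymmetry the scheme relies on.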
My hypothesis for why your subconscious wanted to put your thoughts on FTX on an open thread:
The criticisms of FTX and people saying "I told you so" will have to share space (on an incredibly slow loading page) with a bunch of self promotion, making it less forceful.
I suspect this idea was meant as gentle self-parody—it's basically a proposal to double down on the precise epistemic errors that led to this fiasco.
Investing in a risky new venture is *already* a prediction market on, inter alia, whether the founder will turn out to be a fraudster. Accordingly, the collapse of FTX is a reminder that whatever advantages markets have in theory, for questions like this you can’t always rely on the average of many opinions weighted by the amount of money each is willing and able to bet on being right. People with money to bet on their opinions about a niche topic have collective blind spots, ideological biases, and susceptibility to charisma and deceit just like any other group.
A better update, in my own ideologically biased opinion, would be towards the golden rule of wokeness: beware the wealthy and powerful, because in a dog-eat-dog world it’s nearly impossible to make it to the top by being kind and upstanding.
If this is indeed self-parody, I personally couldn't distinguish it from the other things that rationalists write. Don't know whether that reflects badly on me or the rationalists.
I don't think you need to be especially suspicious towards wealthy people who got wealthy thanks to labor (e.g doctors), but perhaps should be a bit more suspicious of people who got rich thanks to speculation/networking.
Progressives often try to draw a distinction like that—a doctor whose independent practice has finally paid off their loans and then some after a long career is one thing, an MD pulling in a few million a year from pharmaceutical companies in speaking fees and expert witness honoraria deserves a different level of epistemic scrutiny. E.g. AOC likes to post about the difference between $1e6 (moderately suspicious "movie star" money) vs. "systems money" on the order of $1e9, which she argues can only be achieved through various kinds of ruthless exploitation. https://pbs.twimg.com/media/EnpfW3QUwAApRxQ?format=jpg&name=large
This is in general the epistemic virtue I think rationalists could learn from wokeness: attending not only to the first-order merits of an idea, but to which systems of power had to be appeased and whose interests were served in the process of generating the idea and its premises. Ironically the community is very comfortable with this analysis when the powerful ideologues being appeased are, like, faculty search committees and woke Reddit admins, but is excessively focused on these motes in woke eyes while ignoring the beams resulting from centuries of intellectual deference to wealth, whiteness, and other historical sources of power.
How convenient for AOC that having mere "millions" of dollars is okay but being a billionaire is highly suspicious and the fruits of systemic racism - or systemic mumble mumble anyway.
I have the strangest feeling that her total worth might be getting up to the million mark. For somebody working a low-paid job that pays hourly rates, a millionaire is just as suspicious as a billionaire is to a millionaire.
This site says it's "pants on fire" that AOC has a net worth around a million, but the fact that they quote the following makes me wonder how purely truthful, and not at all reliant on totally legal workarounds to reduce tax liability, it is:
https://www.politifact.com/factchecks/2021/mar/10/facebook-posts/aocs-net-worth-over-1-million-s-pants-fire/
"Rep. Alexandria Ocasio-Cortez's latest financial disclosure form showed assets of between $2,003 and $31,000.
Her liabilities were listed at between $15,001 and $50,000, indicating that she may have a negative net worth. "
What was it Mr. Micawber said? I think he would judge "Assets $31,000, Liabilities $50,000, Net Result $19,000 in the hole" to be a good sign she should give up politics and find a paying job 😀
I guess you get a rationalist point for Googling your irrelevant ad hominem and admitting that it's not remotely true? But it's pretty disappointing that you immediately pivot from "AOC is maybe rich so her analysis of wealth and power is not worth engaging with" to "AOC is maybe poor so her analysis of wealth and power is not worth engaging with".
Why not respond to the actual claim she and I are making: in a world where many smart and talented people ruthlessly compete for wealth, often in clearly antisocial ways, we should have a high prior that the very few who get to the $1e9 range have done unethical things to get there?
> in a world where many smart and talented people ruthlessly compete for wealth, often in clearly antisocial ways, we should have a high prior that the very few who get to the $1e9 range have done unethical things to get there?
Motte: "high priors on theft"; bailey: "certainly a theft".
I am curious, what exactly are your priors on "if someone has X wealth (as a reasonable person would calculate it, not after all kinds of tax evasion), they made it unethically" for $1e6, $1e7, $1e8, and $1e9 respectively?
Sorry for getting back to you so late on this one, was doing other stuff and forgot.
I did have a couple of longer answers planned out, but since I am also watching the opening ceremony of the World Cup right now (dear lord please finish up and start with the actual football, nobody cares about bad pop), this is going to be fast and cheap.
(1) Rationalist points? Are those like Green Shield Stamps?
(2) AOC is a brand, a carefully crafted selling point. Ocasio-Cortez is about as grassroots as artificial turf; she was part of a kingmaker campaign, and I do have to hand it to her, she got elected and re-elected, so congrats on that. But all the "Sandy from the block" stuff is PR image.
(3) She's a slick career politician, and after pushing a bit too hard in her first year, she has now settled down to a career as (probably) reliably getting re-elected in the gentrified constituency she ran for, and will continue to issue hot takes to keep her name, and brand, alive and current in the media
(4) That's it, basically I don't take her any more seriously than any other politician who has found a niche and is exploiting it. I guess she could do a whole "turning up dressed in white to scream and cry outside Twitter HQ" in her Billionaire Protest, just like she did outside the car parking lot for the Illegal Immigrant Protest.
I think the information in the column I excerpted above reflects very poorly on Sam Bankman-Fried. His subjective intentions may have been quite benign. But, it appears that he was extremely reckless in the way he organized and ran FTX's business.
If you expect to have strangers entrust you with billions of dollars of their property, you must at the very least keep meticulous records of how much you received, who you received it from, and the conditions of receipt. You must also keep equally meticulous records of what you did with the property you received, etc. Those records ought to make it possible to produce a high-quality balance sheet at all times. The fact that they couldn't is telling.
Right now, I would say that SBF is in very deep legal trouble, that he is likely to be indicted, convicted, and jailed for committing fraud.
Happy quarter-thousandth open thread, everyone!
I understand that Robert Wright went on Bret Weinstein's YouTube channel (the Darkhorse Podcast) a couple of weeks or so ago, to debate Eric Weinstein's probably-crackpot theories/models and perhaps other things. Does anyone know whether/when Bret Weinstein will post this debate? I was really kind of looking forward to it. I see no signs of this showing up on Bret Weinstein's YouTube channel and am even wondering if Wright misspoke and he spoke directly to Eric instead, but searching "Robert Wright Eric Weinstein" isn't turning anything up either.
This is a small excerpt of a column on Bloomberg.com by Matt Levine, formerly an editor of Dealbreaker, an investment banker at Goldman Sachs, a mergers and acquisitions lawyer at Wachtell, Lipton, Rosen & Katz, and a clerk for the U.S. Court of Appeals for the 3rd Circuit. I have not included quotation marks, but what follows is a direct quote; it does not include links or footnotes. It is not my opinion, as I have no first-hand knowledge of the facts:
Money Stuff: FTX’s Balance Sheet Was Bad
By Matt Levine • https://www.bloomberg.com/opinion/articles/2022-11-14/ftx-s-balance-sheet-was-bad
The box
... the balance sheet that Sam Bankman-Fried’s failed crypto exchange FTX.com sent to potential investors last week before filing for bankruptcy on Friday is very bad. It’s an Excel file full of the howling of ghosts and the shrieking of tortured souls. If you look too long at that spreadsheet, you will go insane. ...:
Sam Bankman-Fried’s main international FTX exchange held just $900mn in easily sellable assets against $9bn of liabilities the day before it collapsed into bankruptcy, according to investment materials seen by the Financial Times.
... And yet bad as all of this is, it can’t prepare you for the balance sheet itself, published by FT Alphaville, which is less a balance sheet and more a list of some tickers interspersed with hasty apologies. If you blithely add up the “liquid,” “less liquid” and “illiquid” assets, at their “deliverable” value as of Thursday, and subtract the liabilities, you do get a positive net equity of about $700 million. (Roughly $9.6 billion of assets versus $8.9 billion of liabilities.) But then there is the “Hidden, poorly internally labeled ‘fiat@’ account,” with a balance of negative $8 billion. [1] I don’t actually think that you’re supposed to subtract that number from net equity — though I do not know how this balance sheet is supposed to work! — but it doesn’t matter. If you try to calculate the equity of a balance sheet with an entry for HIDDEN POORLY INTERNALLY LABELED ACCOUNT, Microsoft Clippy will appear before you in the flesh, bloodshot and staggering, with a knife in his little paper-clip hand, saying “just what do you think you’re doing Dave?” You cannot apply ordinary arithmetic to numbers in a cell labeled “HIDDEN POORLY INTERNALLY LABELED ACCOUNT.” The result of adding or subtracting those numbers with ordinary numbers is not a number; it is prison. ...
For a minute, ignore this nightmare balance sheet, and think about what FTX’s balance sheet should be. ... But broadly speaking your balance sheet is still going to look roughly like:
Liabilities: Money customers gave you, which you owe to them;
Assets: Stuff you bought with that money.
And then the basic question is, how bad is the mismatch. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn’t do that, and couldn’t have, because there wasn’t $5 billion of Serum available to buy. FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that’s something?” No it isn’t! ...
If you think of the token as “more or less stock,” and you think of a crypto exchange as a securities broker-dealer, this is completely insane. If you go to an investment bank and say “lend me $1 billion, and I will post $2 billion of your stock as collateral,” you are messing with very dark magic and they will say no. The problem with this is that it is wrong-way risk. (It is also, at least sometimes, illegal.) If people start to worry about the investment bank’s financial health, its stock will go down, which means that its collateral will be less valuable, which means that its financial health will get worse, which means that its stock will go down, etc. It is a death spiral. ...
In round numbers, FTX’s Thursday desperation balance sheet shows about $8.9 billion of customer liabilities against assets with a value of roughly $19.6 billion before last week’s crash, and roughly $9.6 billion after the crash (as of Thursday, per FTX’s numbers). Of that $19.6 billion of assets back in the good times, some $14.4 billion was in more-or-less FTX-associated tokens (FTT, SRM, SOL, MAPS). Only about $5.2 billion of assets — against $8.9 billion of customer liabilities — was in more-or-less normal financial stuff. (And even that was mostly in illiquid venture investments; only about $1 billion was in liquid cash, stock and cryptocurrencies — and half of that was Robinhood stock.) After the run on FTX, the FTX-associated stuff, predictably, crashed. The Thursday balance sheet valued the FTT, SRM, SOL and MAPS holdings at a combined $4.3 billion, and that number is still way too high.
I am not saying that all of FTX’s assets were made up. That desperation balance sheet lists dollar and yen accounts, stablecoins, unaffiliated cryptocurrencies, equities, venture investments, etc., all things that were not created or controlled by FTX. [5] And that desperation balance sheet reflects FTX’s position after $5 billion of customer outflows last weekend; presumably FTX burned through its more liquid normal stuff (Bitcoin, dollars, etc.) to meet those withdrawals, so what was left was the weirdo cats and dogs. [6] Still it is striking that the balance sheet that FTX circulated to potential rescuers consisted mostly of stuff it made up. Its balance sheet consisted mostly of stuff it made up! Stuff it made up! You can’t do that! That’s not how balance sheets work! That’s not how anything works!
Oh, fine: It is how crypto works. ... It looked like a life-changing, world-altering business that would replace all the banks. It had a token, FTT (and SRM), with a multibillion-dollar market cap. You could even finance it, or FTX/Alameda could anyway: They could put FTT (and SRM) tokens in a box and get money out. (From customers.) They could take the dollars out and never, you know, give the dollars back. They just got liquidated eventually. And those tokens, FTT and SRM, were sort of like real monetizable stuff in some senses. But in others, not.
But where did it go?
I tried, in the previous section, to capture the horrors of FTX’s balance sheet as it spiraled into bankruptcy. But, as I said, there is something important missing in that account. What’s missing is the money. What’s missing is that FTX had at some point something like $16 billion of customer money, but most of its assets turned out to be tokens that it made up. It did not pay $16 billion for those tokens, or even $1 billion, probably. [7] Money came in, but then when customers came to FTX and pried open the doors of the safe, all they found were cobwebs and Serum. Where did the money go?
I don’t know, but the leading story appears to be that FTX gave the money to Alameda, and Alameda lost it. I am not sure about the order of operations here. The most sensible explanation is that Alameda lost the money first — during the crypto-market meltdown of this spring and summer, when markets were crazy and Alameda spent money propping up other failing crypto firms — and then FTX transferred customer money to prop up Alameda. And Alameda never made the money back, and eventually everyone noticed that it was gone.
So Reuters reported last week:
At least $1 billion of customer funds have vanished from collapsed crypto exchange FTX, according to two people familiar with the matter.
The exchange's founder Sam Bankman-Fried secretly transferred $10 billion of customer funds from FTX to Bankman-Fried's trading company Alameda Research, the people told Reuters.
A large portion of that total has since disappeared, they said. ...
Oh, a place where the topic is Ukrainian FTX transactions, as opposed to some missile landing in Poland close to the Ukrainian border. From the first impression it very much looks like some unfortunate mistake ... but the level of nervousness in my social networks is considerable.
It's almost certainly a mistake, not clear whose. At least one of the missiles appears to have been an S-300 surface-to-air(ish) missile, which Ukraine uses against Russian cruise missiles and which Russia now uses against Ukrainian cities because they're running out of cruise missiles. It looks like the people who matter are taking the time to get the facts before making rash decisions, which is good.
There will almost certainly be a NATO Article 4 consultation. There will not be a war between NATO and Russia (or NATO and Ukraine) over this; we've tolerated much more egregious and deadly mistakes when it was reasonably clear they were mistakes, e.g. KAL-007 or Siberia Air flight 1812. And nobody believes that anybody had a motive to do this deliberately.
Thanks! This was definitely the most precise assessment of the issue I had seen that evening.
Quite busy now, maybe one or two thoughts to add on that later.
I just wanna remark how the currently 1416 comments on this thread expose how utterly crap substack is as a piece of technology. It takes my gaming PC ~20 seconds to load the top of the comment section, and I'm getting repeated multi-second freezes while I'm writing this comment.
Agreed. One wonders how people were able to implement web-based message boards in the 90s when PCs were much slower.
Does substack provide a comment API? If so, perhaps someone could write a usable client or something.
I think the problem is the same as what we saw back on SSC: we are using software designed for publishing and light commenting as a high-volume discussion forum. It's not surprising the software is straining under the load. The DSL forum, whatever its other faults, is crisply responsive since it is running software actually designed for running discussion forums. The ACX Discord also works fine, for the same reason.
While there were limitations to the SSC setup (WordPress, probably?), such as the indentation eventually eating most of the screen real estate in long discussions, I think the comments were assembled server-side (by PHP?) and thus reasonably fast. This would also mean that you could just load an open thread and read through it offline (e.g. on a plane).
I would guess that substack uses lots of javascript (and js libraries) to dynamically fetch the comments and render them. This approach has some advantages: you can get new comments without refreshing the website. But unless you do it very well (which they don't), it also tends to slow everything down.
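The server-side approach mentioned above is simple enough to sketch: the server walks the comment tree once and ships the client a single blob of static HTML, so the browser does no fetching or DOM assembly in JavaScript. A minimal illustration (the comment structure and markup are invented, not Substack's or WordPress's actual format):

```python
# Render a threaded comment tree to one HTML string server-side,
# indenting replies the way old SSC/WordPress threads did.

def render(comment, depth=0):
    pad = depth * 20  # pixels of indentation per reply level
    parts = [f'<div style="margin-left:{pad}px">{comment["text"]}</div>']
    for child in comment.get("replies", []):
        parts.append(render(child, depth + 1))
    return "\n".join(parts)

thread = {"text": "Top comment",
          "replies": [{"text": "Reply",
                       "replies": [{"text": "Nested reply"}]}]}
html = render(thread)
print(html)
```

Even a thousand-comment thread rendered this way is just string concatenation, which is why 90s-era forums on slow hardware felt responsive; the cost Substack pays is that live updates then require a page reload.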
Yes of course. As I made clear multiple times, the accusation was that flows went in the other direction, Ukraine diverting US aid to FTX for crypto of dubious worth so FTX could donate back to the politicians who passed the aid bill.
But evidence of this direction occurring is lacking.
Your comments are in the wrong place. Stop replying to e-mail.
True but stop with the “signal-boosting” crap. I was seeing it mentioned in a lot of places so it seemed worth adding here as a possibility to be considered. It’s a frigging OPEN THREAD. I appreciate very much the people on this thread who gave reasons why the story was likely to be wrong but the people who instead suppressively implied that I should STFU should STFU.
Is this a good time to update my priors to "never trust anyone or anything ASSOCIATED WITH CRYPTOCURRENCY again"?
That was always the wise decision. If you are into big risk chasing potential big rewards, cryptocurrency is a potential avenue with larger-than-normal risks and rewards.
If you're into weird alternate ways of buying things, cryptocurrency is also a way to do that (though the highly fluctuating values make it problematic to use for that purpose!).
What's the longest period until positive returns we can find for an investment? Can we find something that, for example, required fifty years of payments to get things working, but yay, in the fifty-first year it began producing returns? I guess this would be particularly interesting if we found something that in the end turned out to be a good investment, despite the extremely long period of negative returns.
Here's an analysis that suggests a big nuclear power plant doesn't start returning a positive ROI until 13 years after the initial investment. It starts becoming more profitable than an equivalent gas-fired plant only after 18 years.
https://youtu.be/cbeJIwF1pVY
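The payback-period arithmetic behind figures like "13 years" is simple: find the first year in which cumulative cash flow turns non-negative. A toy sketch (the cash flows below are invented round numbers, not taken from the linked analysis):

```python
# Simple (undiscounted) payback period: first year where cumulative
# cash flow covers the initial investment, or None if it never does.

def payback_year(initial_cost, annual_cash_flows):
    cumulative = -initial_cost
    for year, cash in enumerate(annual_cash_flows, start=1):
        cumulative += cash
        if cumulative >= 0:
            return year
    return None

# e.g. a plant costing 12 units up front, earning 1 unit per year:
print(payback_year(12, [1] * 20))
# and one that never recovers its cost within the horizon:
print(payback_year(5, [1, 1, 1]))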
How long have we been funding research into fusion power? Seventy years, maybe?
I was going to mention fusion, too. But you asked for things that eventually produce positive returns.
In 1830, the Swedish Navy planted oaks on Visingsö for ship building. The keeper reported back that the oaks were ready for harvest in 1975. https://www.atlasobscura.com/places/visingso-oak-forest
An individual stand of trees in a plantation forest will take roughly that long to grow before it can be cut down and sold. But of course that's more of a known quantity than the kind of thing you're thinking about (until some beetle comes along and kills them all) and you can put down x number of 30 year old trees as an asset on your balance sheet.
EVERYONE, want to understand what happened? IMO, this article by Matt Novak gives the best start for understanding FTX & Samuel Bankman-Fried. It also works as an introduction to cryptocoins at large. Then, to understand how people got bamboozled, from the inside, read the NYT article it corrects. The NYT writer has yet to get wise.
https://gizmodo.com/nytimes-bizarre-softball-article-ftx-sam-bankman-fried-1849783646
Please, #9's “how can I ever trust anybody again?” is the wrong question, with a misguided answer.
If people promote Wrong, no matter how trustworthy the people, the Wrong remains wrong. The original bitcoin whitepaper is Wrong. Specifically, it spoofs monetarism. Possibly intentional IMO.
Ask a better question. "How can I make sure I know the basics in a field before I commit to a position in it, let alone commit resources?"
Can you elaborate on what you mean by "it spoofs monetarism"?
QTM is nonsense, & the 'Satoshi Nakamoto white paper' presupposed QTM.
Thanks for your question. Apologies for not checking back sooner.
Please feel free to let me know if I have not given a sufficient answer.
https://www.businessinsider.com/personal-finance/quantity-theory-of-money
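For readers unfamiliar with the acronym: QTM is the quantity theory of money, whose standard statement is the equation of exchange:

```latex
% Equation of exchange: M = money supply, V = velocity of money,
% P = price level, Q = real output (volume of transactions).
M V = P Q
```

The quantity theory reads this identity causally, treating $V$ and $Q$ as roughly stable so that growth in $M$ shows up as growth in $P$; that is presumably the reading the commenter means when saying the whitepaper "presupposes" QTM.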
The accusation is probably FALSE, but it isn’t NONSENSE, you just misunderstood it. The accusation is that some of the aid money the politicians sent to Ukraine was used to buy crypto from FTX rather than spent on actual, you know, AID, and FTX then made huge amounts of political contributions.
The rebuttal is that all the transactions between Ukraine and FTX were in the other direction from that, which is fair. But the accusation was a coherent story.
That an accusation is a "coherent story" is a necessary but not sufficient condition for making or signal-boosting that accusation in polite society.
In general, the whole thing with EA seems similar to many other things that appear ridiculous about rationalism. You take a simple idea that is eminently sensible when you put it in a few words. Charitable giving is often inefficient - what if we start evaluating charitable giving by how much bang for the buck you get for it? Very sensible!
Then you put it in a crowd of people with a few well-known features, like love of Big Ideas, addiction to the novelty of new ideas or revisionist takes on existing ones, an almost comical belief in the power of Reason in comparison to tradition/law/taught ethics/societal approval/etc (right down to the name of the crowd), and a tendency for constant iteration - and soon the original idea starts mutating into new forms, so that before long you're giving all your money to the Computer God, or becoming utter caricatures of utilitarianism straight from philosophical debates that have been ongoing for decades and centuries, or banking on gee-whiz businesses as long as they're aligned with the cause, or just opening yourself up to all manner of grifters and fast talkers in general.
The same applies to polyamory, or nootropics, crypto or all manner of political ideologies beloved by rationalists - not that the simple idea behind them is necessarily good to begin with, but even then it just all seems to get worse and weirder, and doing so quite fast.
What one seems to need is stopgaps, intellectual roadbumps - but even then, what would these be, who would set them, and how would you take care the movement doesn't just barge through them with the power of Reason, like with everything else?
Attention to detail has diminishing returns. The problem isn't that rationalism 'appears ridiculous'; it's that the limited life and attention spans we're bound to have as a species make us gravitate towards practicality over precision, and rationalism is the reverse - precision over practicality for its own sake. Iterated, that tends to create a bunch of ad absurdum scenarios, and it alienates. Bashing philosophy is viewed as anti-intellectualism, even though its big questions can likely only be answered through science and it isn't going to generate new answers that could change much of anything - yet rationalism tries just that, often in smaller, more practical fields of inquiry.
So rationalism is now likely taking up the spot that mainstream philosophy occupied as an area of inquiry before stagnating in modernity, but it ends up eminently more mockable because it engages with smaller, more commonly accessible, more impactful areas of knowledge than the 'big questions', and thus each failure of its inquiry becomes something to status-boost off of.
The issue you describe isn't a failure of rationalism; it's more of an attack angle used against generalist enthusiasts by mainstream specialists who see them as a threat, since alternative inquiry disregards the PR/politics of a given community. E.g. EA makes wasteful signaling more apparent. But all principled alternative inquiry tends to generate new ideas and new data and is ultimately good for society as a whole.
Well, traditionally your roadbumps are experience in the real world, and the cautious conservatism to which it tends to give rise. Why didn't Warren Buffett lose a shit-ton of money from the FTX crash? Why didn't I? Because we're old. We've seen this movie before, many times, with different actors and different effects, but the same script and, alas, the same ending.
Experience isn't as eloquently verbal as Reason. Often Experience gets tongue-tied, can't even explain *why* it feels Reason has reasoned itself into absurdity, reaching conclusions that will be savagely crushed when they come into contact with objective reality. Heck, if I ask the plumber why he does this-and-such while fixing my drains, I don't expect -- and don't get -- any treatise on hydrodynamics. Sometimes he can't even really explain it in terms we have in common at all. But I'd be a fool to substitute my own reasoning, however brilliant and apparently logical, for his experience.
That's not to say Reason isn't Queen, isn't the most profoundly useful intellectual tool we have, to be honored above nearly all else. But Experience is King, and like all partnerships this one works best when there is mutual respect and cooperation between the principals. Trying to use only one or the other just gets you mysticism, stagnation, or epic disaster.
Why didn't I lose any money from the FTX crash? Because I am too stupid and lazy to get into crypto savings, though I really liked Allen Farrington's writing. Being stupid and lazy has its upsides. I'd still buy some bitcoin if any of my kids would manage it for me.
A better way to put that is that you got lucky, in that you were lazy *before* you invested in FTX instead of *after*. If you got in but got out in time, you made bank.
And sure, good luck beats any amount of experience or reason, every time. That's why nobody ever beats James Bond at baccarat, despite his spending way more time eying the zaftig onlookers' cleavage than counting cards: because James Bond has infinite good luck. No amount of skill can beat that.
I heard there was a new blockchain tech that allowed people to invest some of their natural good luck (which is "mined" by a powerful GPU via a P2C2E) and get a return of 8% per annum. I'm going to sign up right away!
Experience by definition takes time to acquire and is largely non-transferable, but Reason doesn't necessarily have either of those constraints. Aristotle's Nicomachean Ethics has a line suggesting that it was a matter of general agreement at the time that while the young were never masters of "practical wisdom", they often could be exceptional at geometry and math, suggesting a long history of people agreeing with that observation. One might say Reason can travel halfway around the world before Experience can get its boots on.
True. And it's worth noting that hard experience can easily make you timid. If we didn't have the young egging us on to try stuff that on first appearance sounds lunatic, we'd never get anywhere and still be hunting antelope with sharpened sticks. If we didn't have the old saying now hold on just a God-damned minute, this is the same dumbass idea we tried in the winter of '49, the kids would blow civilization into smithereens. Like a lot of stuff, balance is the key, I think.
Very nicely put.
It's ultimately because of Reason that we're no longer freezing in mud huts, but live in comfort and are able to instantaneously communicate across the whole world for discussions on abstract topics like this one. You can't deny that Reason is pretty sexy and it's unsurprising that people tend to fall in love with it headlong.
Stopgaps and intellectual roadbumps develop when people run headfirst into debacles, say, like this one, a learning process that nobody yet found a way to short-circuit (and how would they do it otherwise anyway, if not by trying to apply Reason?)
I think you're missing the distinction between reason and Reason, the idea that pure intelligence and independent thought can come to better conclusions than other forms of knowledge (especially tradition and experience, which Rationalists explicitly reject).
To me, the Rationalists are often trying to reinvent the wheel while rejecting any existing knowledge about what wheels do and why. Sometimes you come up with a neat and novel idea, but sometimes you reinvent a dead end that society rightfully tossed a long time ago and end up down a bad path you can't get back out of.
It's like you didn't actually read the comment you're responding to.
The comment criticises rationalism and specifically gives many examples of alternatives.
< ..a crowd of people with a(n)... almost comical belief in the power of Reason in comparison to tradition/law/taught ethics/societal approval/etc >
So maybe the problem is that unadulterated 'on-steroids' reason leads to all sorts of crazy towns, repugnant conclusions and people with moral vacuums causing great suffering in the world. And maybe, just maybe, some of those other things might be used to temper the pernicious results of relying on pure reason.
And your response is -
< 'and how would they do it otherwise anyway, if not by trying to apply Reason?' >
Are you perhaps a rationalist?
>tradition
Mud huts are very traditional, and yet somehow pretty much nobody is enthusiastic about returning there.
A big tradition of the modern civilization is to discard outdated traditions, and that implies taking risks. You don't get to have progress without making mistakes.
>And your response is
No, my response was that running headfirst into debacles is inevitable, as that is the only way that people eventually learn in practice. Rationalists do think that it's possible to use Reason to avoid them, and I agree that it's naive.
I think there's homeless people e.g. in California who want to build mud huts, but building codes don't let them.
Fair response. Thanks
It's not Reason that is the problem, it's the bubble that the Reasonable are all living and working and socialising and networking and going to conferences and writing well-received little think-pieces in.
Congratulations, EA has now reached its "how many angels can fit on the head of a pin?" stage. (Which was never an actual proposition but we'll get into that later). Scholasticism *did* ossify into a system more and more removed from any practicality and more and more interested in logic-chopping (I will grant this much to Luther and the Reformers) because it got into its own little bubble of jargon and specialisation and 'the normies are too IQ 90 to grok this' attitudes.
If one of the results of this entire mess is that the Rationalists start to puncture their bubble, all to the best. Scholasticism needed the boot up the backside to flower again.
Now, the "angels on the head of a pin" thing. It's a pop culture reference and like many pop culture references has forgotten its original roots in Protestant polemics. I had a vague sense that it was resurrected/re-popularised by Isaac D'Israeli in one of his volumes of essays:
https://www.gutenberg.org/cache/epub/21615/pg21615-images.html
"The reader desirous of being merry with Aquinas's angels may find them in Martinus Scriblerus, in Ch. VII. who inquires if angels pass from one extreme to another without going through the middle? And if angels know things more clearly in a morning? How many angels can dance on the point of a very fine needle, without jostling one another?"
(For who or what Scriblerus is: https://en.wikipedia.org/wiki/Memoirs_of_Martinus_Scriblerus)
But Wikipedia doesn't even mention him, and he probably did pick it up from 17th century polemics via Pope and his circle, as per the Wiki article.
https://en.wikipedia.org/wiki/How_many_angels_can_dance_on_the_head_of_a_pin%3F
The question is not itself unreasonable, as Dorothy Sayers says, but it has been used as a symbol of how abstruse and fruitless such over-logical debates are. Rationalism take heed?
"Dorothy L. Sayers argued that the question was "simply a debating exercise" and that the answer "usually adjudged correct" was stated as, "Angels are pure intelligences, not material, but limited, so that they have location in space, but not extension." Sayers compares the question to that of how many people's thoughts can be concentrated upon a particular pin at the same time. She concludes that infinitely many angels can be located on the head of a pin, since they do not occupy any space there".
I'm impressed you've read Sayers on theology, although not especially surprised. But the correct answer seems a trifle insipid, I liked better the cheeky one given by Dejah Thoris Burroughs (if memory serves) which (paraphrased) was: "Easy! Let A be the area of the head of the pin, and let B be the area of an angel's ass. The desired number is A/B. Carrying out the math is left as an exercise for the reader."
Could mastodon be hard enough to search that communities will have more time to grow?
I've read speculation that the internet itself may have developed a form of self-awareness and perhaps could be considered a collective intelligence... maybe I'm misremembering and it was my own speculation.
Nonetheless if it is anything like this thread, it is hopelessly in disagreement with itself and probably, as a whole, risk averse which means the internet consciousness is not an EA?
Wow, how many logical/factual errors can one cram into one post? Based on this post I have to say it is doubtful the internet-consciousness is particularly intelligent!
Unless Minsky et al are correct. 🙂
Is the whole 'internet consciousness' thing, just a long way around to saying "this post is wrong in many ways, but I'm not actually going to list any of them or make actual arguments about them?"
No. On all counts.
Just floating ideas.
Does everything submitted here have to be an argument? A debate? A discussion?
Or correct?
My fun place is questions, not answers!
The point of my question was to say your comment came across as that sort of comment. If that wasn't your intention, and you really want to talk about internet consciousness, then including a sentence like: "Wow, how many logical/factual errors can one cram into one post?" is probably unhelpful.
Yeah, I think you're right. I really need to speak more precisely or develop a high tolerance for the mishaps of miscommunicating. I choose the former. I also have noticed that a lot of the posters here have a great, haiku-like concision and can communicate a lot of meaning in just a few sentences. I tend to go on and on!
But if you have questions on any of the ideas, please ask and I'll answer as best I can! Personally I think it interesting that consciousness might be an emergent property of a sufficiently large, sufficiently interconnected network. Is the internet at or near that level? Dunno!
Agree that exchange of information doesn't imply language or consciousness. Nor does mere complexity. So what does? How do we prove that we ourselves are conscious? Pass Turing tests?
Our complex network of interconnected, individually unconscious brain cells is likely its source, but where is it? Is it subdividable? Big questions! The ghost in the machine. I've just recently started looking at philosophy again after a forty-year absence and it looks like most of the open questions then are still open, but the birth and growth of the internet, social media, computer science, etc. have sure given us new lenses to look at old problems through!
> So what does?
It seems to have something to do with having a model of the self but the specifics are not yet known.
> How do we prove that we ourselves are conscious?
This is backwards. We define consciousness as a thingy that we have.
"Bankman-Fried" has at least a little bit of a kabbalistic ring to it.
It's incredible how this NYT piece whitewashes SBF: https://www.nytimes.com/2022/11/14/technology/ftx-sam-bankman-fried-crypto-bankruptcy.html
No talk about his criminality at all. He's Bernie Madoff and they portray him like Howard Roark. Pays to have good family connections.
I am under the impression that some people in this community may have trusted Tether based significantly on assurances from FTX/Alameda individuals that they trust Tether.
If so, it seems prudent to disregard such support, and reassess your trust of Tether without regard to any statements from FTX/Alameda sources.
Tether has been widely known to be suspect for 5+ years, before FTX was even a thing.
Sure, but people here may have been trusting Tether despite its suspicious nature because they trusted FTX/Alameda who had a vested interest in Tether.
For example, here: https://astralcodexten.substack.com/p/mantic-monday-scoring-rule-controversy
we have a comment from a user named "SBF" saying basically that they trust Tether.
Here https://www.reddit.com/r/slatestarcodex/comments/kzhuxb/the_bit_short_inside_cryptos_doomsday_machine/gjshq85/
we have a comment from Scott himself which takes a more mixed view, but IMHO it still sounds like he was getting his info from FTX/Alameda, albeit from someone there who cared enough to present a nuanced view that wouldn't encourage him to make huge bets on Tether either way.
My general point is that FTX/Alameda assurances about Tether should obviously no longer be trusted and people who used to trust FTX/Alameda should reassess accordingly. My examples show people here may have used FTX/Alameda figures as trusted sources of information about crypto due to overlap between FTX/Alameda and ACX readership.
To be fair, the question was about whether Tether would collapse in 2021, and SBF was right about that.
Yeah my estimate of Tether actually being a sane thing to trust has been extremely low for a long time.
Maybe all of this was *vaporware*?
Most people don't know what FTX is. Most people have no idea who SBF is. Most people have never heard of EA.
Is it possible you were gaslit into thinking this was the future of humanity, and now that the con artist has been outed the confrontation with reality feels a bit unbearable?
Most rationalists feel like they are way too superior to fall for the Nigerian prince scam, but that only makes them easier prey for the slightly more advanced scam.
It looks like Ukraine made a lot of transactions with FTX but they are claiming it was only converting crypto to cash and not the other direction. I have not seen their claim rebutted so for now it’s a sufficient explanation, as it’s the other direction that would be involved in any aid-laundering scheme.
FWIW, when the war broke out I tried to donate to the Ukraine military directly via their national bank, using my credit card. I did this because I have a strong belief that national territorial integrity is paramount to many good things, and allowing a world where borders are subject to invasion sets a precedent for much bad and very little good.
Anyway, I tried to donate to the national bank and my credit card immediately fraud-flagged it and stopped the transaction. Why? Because donating to foreign national banks is unusual, I guess? Anyway, moving money internationally has lots of barriers and costs but moving crypto has essentially none, so I sent the money I wished to donate via crypto to Ukraine and that worked.
TL;DR: I can easily see a world where Ukraine had a bunch of crypto they needed to convert to useful currency and did so via an exchange (FTX).
You mean, the allegation is that "Ukraine" as a country stole money from itself? How is that supposed to work?
Presumably some Ukrainian officials might have stolen money "from Ukraine" via FTX somehow; which honestly sounds plausible, but what is the source for this Ukraine-FTX connection?
It's called a kickback. You know, sending a little money back to the politician who got you the big public contract. I'm not saying it happened in this case, but it's not a nonsensical idea.
Yea, that sounds like a coherent story, but pretty unlikely imho.
Ukrainian officials caught stealing aid, on the other hand, is something that imho is very likely to happen someday. After all, Ukraine is not exactly Switzerland, and it is getting huge sums of money; surely someone is stealing something. That is not a reason to stop helping them, to be clear.
That's part of the problem - FTX was a big exchange so now anybody who had any dealings with them is going to be looked at as possibly suspect, either they got conned or they were in on the con.
Great way to shred everyone's reputation, Sam and company!
"some people have asked if effective altruism approves of doing unethical things to make money"
Curious why you use the broad formulation "doing unethical things", when you seem to be talking specifically about committing criminal fraud. When it comes to more general immorality, e.g. working for a company that significantly profits from animal exploitation, marketing unhealthy foods to kids, knowingly encouraging innumeracy to make a product or service seem more useful, is there really such a strong consensus in EA? Presumably the reasoning goes: your individual participation in ethically problematic markets only marginally increases the harm (someone else would do it anyway and on average either do a slightly worse job or need to be paid a slight bit more), whereas your donation creates an absolute benefit.
I got clued in quite early to what SBF/Alameda were likely up to, but I had the benefit of having access to some perspicacious people on Crypto Twitter.
https://twitter.com/powerfultakes/status/1562549806249426945
Prompted by the below discussion, I'm beginning to wonder whether I might be aphantasic as well.
At first I thought "of course not, I can visualize things just like seeing them", but when I actually try, I can only see a tiny little bit or have a sort of vague outline or impression of an image. No matter how hard I try, I can't visualize a full image, even a small one.
For example, at one point I imagined someone drawing and pointing a sword, and I could clearly see the shape of the sword moving around, but I couldn't see the person holding it at all!
Edit: On the other hand, I definitely see things while dreaming.
Re: due diligence: all it would have taken is to read the companies' balance sheets.
That's what Binance did during the day or two they were thinking about acquiring the failed companies, and that's why they walked away: they saw how much larger their debts were than their assets.
Lessons for EA-backed charities:
- Insist that donations go through transparent, trustworthy, solvent evaluators like GiveWell.
- Have these evaluators run regular financial audits of the largest donors.
- Don't spend money till you have cash in hand!
Binance and Changpeng Zhao are really the winners here, and I'm not so sure I'd trust Binance either, they seem to have had a rocky patch or two.
But whether he intended it, or just seized the opportunity, Zhao has triumphed over his rival Bankman-Fried and won their feud. You have to give the guy credit for knowing when the optimum moment to jump was. "We're going to pull your chestnuts out of the fire - oops we had a look at your balance sheet and no way" was a wonderful way of putting the kibosh on FTX.
Why would one expect the efficient-market hypothesis to hold, even approximately, for crypto?
Seems to me that a key ingredient to the EMH is evolution through natural selection: actors who are better at accurately pricing assets get richer at the expense of actors who are worse at it, and those who misprice assets run out of money and so lose their ability to distort the market.
But the big question with crypto is: is the entire industry going to suffer a catastrophic crash from which it will never come close to recovering? Even if it obviously is, we wouldn't expect there to yet have been any evolutionary pressure against market participants who fail to understand this.
An analogy: plenty of smart traders believe in Christianity, even though Christianity is (IMO) obviously nonsense. This is because there is no feedback system by which wrongly believing in Christianity is punished (I guess you could say the punishment is that they waste time ineffectually praying but that's very weak feedback). The EMH clearly doesn't apply to the hypothesis "traders can accurately divine how likely it is that Christianity is true" so why should it apply to the hypothesis "traders can accurately divine how likely it is that the crypto industry will soon crumble almost completely"?
Crypto might eventually settle down as a genuinely reliable unit of currency, but I think that's still about five to ten years away. Right now this is still 19th century "everyone sets up their own bank and/or investment firm and a lot of them crash and take all your money with them" territory.
Yeah. I’d love to see a currency not subject to the failures of fiat currency. But you’ll recognize that when the notable feature is that the bitcoin price of a loaf of bread is the same next year as it was last year. You don’t get filthy rich putting your money into a stable currency, you just protect yourself from the ravages of a fiat currency.
A unit of currency has to be universally accepted in shops, online, etc. Crypto isn't that, nor is it a reliable store of value or medium of exchange.
What it is is a volatile digital asset class, backed up by nothing. Which is fine. Devil take the hindmost. Caveat Emptor. A fool and his money...
Yes! I don't think it's supposed to settle down, it's a reversion to 19th century decentralized banking because the central bankers are too openly crooked and incompetent.
The EMH proposes that nobody can profit from trading. Meaning there are no "actors who are better at accurately pricing assets".
That would be the "strong" version of the EMH. The "weak" version says that while it is possible to outperform the market through some combination of skill and hard work, there is no free money lying on the ground -- if there was something as simple as "every Wednesday afternoon, all tech company stocks are structurally undervalued" then people would notice, people would start buying tech stocks on Wednesdays, and pretty soon the observation would not be true anymore.
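The self-erasing dynamic in that "Wednesday tech stocks" example can be sketched as a toy simulation (the numbers and the linear price-impact rule are assumptions for illustration, not a market model):

```python
# Toy model of the weak-EMH argument: a predictable discount shrinks
# as arbitrageurs pile in. Price impact is modeled as a simple linear
# function of buying pressure (a deliberate simplification).

def run_market(initial_discount, rounds, impact=0.5):
    """Each round, traders buy the discounted asset; buying pressure
    proportional to the discount pushes the price up, shrinking the
    discount. Returns the discount observed each round."""
    discount = initial_discount
    history = []
    for _ in range(rounds):
        history.append(discount)
        discount -= impact * discount  # arbitrageurs close half the gap
    return history

# A 10% structural discount decays geometrically once it's noticed.
print(run_market(initial_discount=10.0, rounds=6))
```

Under any positive `impact`, the exploitable pattern shrinks toward zero, which is the weak-form claim: free money doesn't stay on the ground once people can see it.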
I don't think many people believe the EMH is literally true, though - e.g. it's definitely the case that if you phone up JP Morgan and ask them for a bid-offer spread on a particular asset, with nonzero probability they will make a mistake and you will be able to make some money off them. The EMH is useful as an approximation.
What I'm arguing is that there is no particular reason to expect that market prices in crypto are anywhere near rational, because there's no reason to believe that the market is accurately pricing in the risk of the permanent collapse of the industry.
The EMH doesn't imply that prices are rational. It implies that you can never tell if prices are rational.
Wikipedia says "The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information". To me this is equivalent to "prices are rational".
From what I've read about the EMH, that is not what Fama meant by it. I wouldn't trust Wikipedia on this concept. A popularization of the EMH is not the EMH.
What would happen if a fully rational person who was aware of their own rationality believed your version of the EMH? They would reason thus: "Suppose that there exists an irrational market price. Since it is irrational, there must be some logical argument that I, a rational person, could follow to deduce that it is wrong. But I would then know that it is irrational, violating the EMH. This would be a contradiction, therefore all market prices must be rational." At this point the rational person has proved that all prices are rational, which is the thing your version of the EMH said could never happen.
It is also false.
I tried writing a story in the voice of a near-future chatbot: maybe what GPT-7 could sound like. It seems to me like the only way we've figured out to make large language models work is through attention-only transformer models, which myopically focus on next-token prediction. This means they can keep getting better at finding superficial associations, reaching for digressions, doing wordplay, switching between levels of "meta", etc, but won't necessarily cohere into maintaining logical throughlines well (especially if getting superhuman at next-word prediction starts pulling them away from human-legible sustained focus on any one topic).
This seems like it could pose a serious problem, given that the only way we've figured out how to produce (weak) AGIs is through these large language models. In other words, if you want an artificial agent who can perform tasks they weren't trained for, your only good option these days is asking GPT to write a scene in which an agent performs that task, and hoping it writes a scene where that agent is sincerely good at what you want, instead of being obviously bad or superficially good. (Ironically, GPT isn't best thought of as an agent, even though it produces agents and environments, because it isn't optimizing any reward function so much as applying an action principle of sorts, like physics; see "Simulators" by Janus at LessWrong for more). It may seem surprising that the best way we have to make any given character (or setting) is to make one general-purpose author, but it makes sense that throwing absurdly superhuman amounts of text data at a copycat would work well before we have better semantic / algorithmic understandings of our intelligence, because the copycat can then combine any pieces of this to propagate your prompts, and combinatorial growth is much faster even than exponential. If these "simulators" keep giving us access to stronger AGIs without improving their consistency over time, we don't have to worry so much about paperclip maximizers or instrumental convergence, but rather about our dumb chaotic human fads (wokeism, Qanon, etc) getting ever more speed and leverage.
I also tried keeping to bizarre compulsive rules while writing this--mostly not ending one word with the sound which begins the next word, mostly not allowing one sentence to contain the same word multiple times, etc--because I think we'll see strange patterns like that start cropping up (much as RLHF has led popular GPT add-ons to "mode collapse" into weirdly obsessive tics). As I practiced this, I felt like I was noticing discrete parts of myself notice these sorts of things much more, which fed that noticement further, until it blotted out other concerns like readability or plot, until even the prospect of breaking these made-up arbitrary taboos felt agonizing; I imagine this is sort of what inner misalignment "feels" like. Maybe that's also sort of like what Scott mentioned recently with regard to cultivating "split personalities." Anyway:
https://cebk.substack.com/p/the-singularity-is-fear
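For readers unfamiliar with what "myopic next-token prediction" means in practice, here is a minimal caricature: a bigram count model, vastly simpler than a transformer, but the generation loop has the same greedy one-token-at-a-time shape with no global plan.

```python
from collections import Counter, defaultdict

# Caricature of next-token prediction: count bigrams in a tiny corpus,
# then generate by repeatedly emitting the most likely next token.
# Real transformers learn far richer conditional distributions, but
# the autoregressive loop is structurally the same.

corpus = "the cat sat on the mat and the cat slept".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length):
    tokens = [start]
    for _ in range(length):
        candidates = bigrams[tokens[-1]].most_common(1)
        if not candidates:  # dead end: no observed continuation
            break
        tokens.append(candidates[0][0])
    return " ".join(tokens)

print(generate("the", 4))
```

Nothing in the loop ever looks more than one step ahead, which is the worry voiced above: local plausibility can keep improving without any mechanism enforcing a logical throughline.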
> I hope the investigation finds some reasonable explanation, like that they were doing so many stimulants
One source suggests SBF was on EMSAM / Selegiline (twitter.com/AutismCapital/status/1592237980458323969), which causes pathological gambling, compulsive buying, compulsive sexual behavior, and binge or compulsive eating. Ticks all the boxes.
tandfonline.com/doi/abs/10.1080/14737175.2019.1620603
sciencedirect.com/science/article/pii/S1353802007002088
pubmed.ncbi.nlm.nih.gov/16730214
Honestly, I'm starting to feel like one of those old wives who was clucking away in the background about "don't do drugs and don't do weird drugs that you think make you smarter than anyone" and of course the kids ignore all that advice and then the old biddies get to say "I told you so".
The impression I am getting is that the American medical system is banjaxed (but then, aren't they all?) and if you know how to game the system, doctors will write you out prescriptions for all kinds of medication on the grounds that you need it to do well in school/work/learn how to tie my own shoelaces. If you're too honest/poor/lower class, you get written off as a drug seeker or not having that problem, so you can't access all the legal stimulants middle-class college kids can.
I don't know if that is a true picture, but it's the one I'm getting from all these reports.
When I read the linked Eliezer article about the ends not justifying the means, I was surprised to find that nobody in the comments mentioned Original Sin, since he in many ways recreated the idea there. Like in this quote:
"But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe."
That rings to me of a hundred sermons I've heard on humankind's sinful nature: that we are corrupted beings who when following their own desires and reasoning inevitably go wrong. It reminds me of Paul, writing:
"For I have the desire to do what is good, but I cannot carry it out. For I do not do the good I want to do, but the evil I do not want to do—this I keep on doing. Now if I do what I do not want to do, it is no longer I who do it, but it is sin living in me that does it. So I find this law at work: Although I want to do good, evil is right there with me. For in my inner being I delight in God’s law; but I see another law at work in me, waging war against the law of my mind and making me a prisoner of the law of sin at work within me."
Of course there's a big difference between Eliezer's conception and the classical Judeo-Christian one: namely, Eliezer believes that a perfect intelligence without corruption would be able to act out utilitarian consequentialism accurately, and that would be the right thing to do. On the other hand, is this actually so different from the Judeo-Christian conception? God does a lot of things that would be considered wrong for humans to do: Christians tend to justify it on non-utilitarian grounds (i.e., God can kill people because we all are His rightful property in some sense, or something similar) but you could also justify them by Eliezer's criteria: as a non-corrupted superintelligence, perhaps God can make those kinds of utilitarian decisions that we corrupted and sinful men cannot. He can decide to wipe out all of humanity except one family in a flood, because he can calculate the utils and knows (with the certainty of an omniscient superintelligence) that this produces the best result long term, that the pre-Deluge population is the one man on the trolley tracks who needs to die so that the five men can live. Certainly Leibniz's idea that this is the best of all possible worlds rests firmly on that same justification: that all the bad things caused or allowed by God are justified in the utilitarian calculus, because all alternate worlds would be worse.
I don't know if I buy all that, but it surprised me how rationalists find themselves re-inventing the wheel in some cases. More power to them; better to re-invent it than have no wheels at all.
https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans
But God is not the guy deciding whether to divert the trolley or not. God can poof the entire trolley out of existence, if he chooses. Given that he is not only omniscient but also omnipotent, if he STILL chooses to kill the one person to save five, he is being unambiguously immoral--regardless of whether you use utilitarianism or deontology as your moral framework.
Things get complicated when you bring things out to God's scale, so let me try to cliff notes Leibniz's Best of All Worlds thesis:
Clearly, there are bad things in the world: like trolleys crushing people to death. Clearly, if God is omnipotent then he could poof these bad things out of existence. That they exist at all means that God wants them to exist, inasmuch as He at minimum allows them to exist and, more importantly, created the universe in the first place.
If God is omnibenevolent, then this leads to a problem: why would an omnibenevolent God allow bad things to exist when He has the power to make them not exist?
Leibniz's answer comes from God's other famous omni: omniscience. God knows all, including the exact results of any actions He takes and how they will change the future going out to infinity. We may look at our world, where trolleys kill with impunity, and ask ourselves "Why doesn't God just poof the trolley out of existence before it crushes anyone?" Yet, we can't predict what the effects would be if we lived in a world where things poofed out of existence any time they would harm someone. Would that world be a better world than our own? What would the long term effects be? We don't know. God does know. So, if trolleys remain unpoofed, and if God is omnipotent, omnibenevolent, and omniscient, then it logically follows that a world where trolleys poofed before crushing people would be, on the whole, worse than the world we currently live in. Indeed, it follows that our universe is the best possible world there could be, since God would have created a different universe if it was better.
A natural rejoinder would be "If God is omnipotent, then he could poof the trolleys and also use his power to make that world better." However, this is a flawed understanding of omnipotence. Omnipotence means that God can do anything that it is possible to do, but some things are logically impossible. God can't make someone a married bachelor, for instance: the statement is incoherent. As Lewis wrote in *The Problem of Pain*:
>"His Omnipotence means power to do all that is intrinsically possible, not to do the intrinsically impossible. You may attribute miracles to Him, but not nonsense. This is no limit to His power. If you choose to say “God can give a creature free-will and at the same time withhold free-will from it,” you have not succeeded in saying anything about God: meaningless combinations of words do not suddenly acquire meaning simply because we prefix to them the two other words “God can”. It remains true that all things are possible with God: the intrinsic impossibilities are not things but nonentities. It is no more possible for God than for the weakest of His creatures to carry out both of two mutually exclusive alternatives; not because His power meets an obstacle, but because nonsense remains nonsense even when we talk it about God."
Now it would be quite possible for God to have created a universe without pain or suffering: He could have created a dead universe with no living things in it, or created no universe at all. So if we live in a universe with suffering, it follows that God's goal is not to minimize suffering. There is something He wants that is worth suffering existing to get. Christians generally believe that this something is us: free willed intelligences capable of reason, love, etc. Though perhaps He wants something else even more, and undoubtedly He wants many things that we don't know. Leibniz would hold that this is the best possible world in which God can achieve all of His objectives. Because Christians believe that we are made in God's image, a universe where He achieves all His objectives is also the best possible universe for ourselves: at a fundamental level we were created to share the same utility function, as Eliezer might say.
So, to put it shortly, the metaphorical trolley in God's case is not whether to kill one man to save five, but whether to create a universe with humans in it and also suffering, or to create a universe with no humans and no suffering (or not create a universe at all).
Assuming you believe in heaven, a realm where trolleys cause no suffering, and there is no logical contradiction, you should account for why an omnipotent God prefers to maintain this trolley-filled world rather than immediately set the world to the heavenly state.
Presumably the only way to get heaven (or at least a heaven with people in it) is to first have trolleys.
More specifically, in Christian theology there is a "place" called heaven where (there is some debate on this) the dearly departed exist with God in a blissful state: however, those dearly departed are waiting for Judgement Day. Christians believe that when Jesus returns he will overturn the world as it currently is, destroying it. Then comes the Judgement where God will judge both the quick (living) and the dead, separating the damned from the saints. Then the damned will be condemned to Hell (some say annihilated, most don't) while the saints will inherit the New Earth: the world recreated to be a paradise empty of trolleys, where they shall live forever.
This matters! If Paradise is found when the world as we know it is destroyed and replaced with a better one, then it may very well be that this bad world we live in is a temporary necessity, required to create the conditions necessary for Paradise to exist at all. It would mean that the only way you can get free willed intelligences that are aligned with God's values to the point where they could exist in Paradise without ruining it, forever, is by first putting them through the trials of this world, and weeding out the ones who are not willing or able to exist in Paradise without ruining it. Which is a gross simplification of the Christian idea of Atonement, Redemption, Damnation, etc. But I only have so much space.
I'm familiar with that objection, but I don't buy it at all. It seems like handwaving away a fatal objection to the God hypothesis. "Oh, the hypothesis is flatly contradicted by the data? Well, I'm sure there's some explanation somewhere that rescues the hypothesis. No, I can't find it. No, nobody else has found it either, but trust me, the Holocaust was actually a good thing!" Here, I agree with Lewis: nonsense remains nonsense even when we talk it about God, and I consider it nonsense to suggest that the Holocaust is good and that the world would not be better off if God had prevented it.
The idea that this world is the "best of all worlds" runs into two other severe problems. First, even we, puny humans, have managed to drastically improve the world with things like antibiotics, sewage systems, democracy, and electricity. If we can do it, how come God couldn't? Second:
"Now it would be quite possible for God to have created a universe without pain or suffering: He could have created a dead universe with no living things in it, or created no universe at all. "
...is contradicted by mainstream Christian theology itself, which asserts that God *did* create a universe that has living beings, but is nevertheless without pain or suffering. It's called "heaven". So if God did it once, why couldn't he have done it twice?
I'm pretty sure God uses neither, and that whatever goal he's pursuing is independent of simple human concepts like utility. It seems silly to even apply moral analysis to an omniscient omnipotent singular being. What're you going to do, tell the omniscient entity that it should know better?
An omnipotent being pursuing goals beyond your comprehension that have no relation to human values isn't God, it's Azathoth.
Three things:
1. "Beyond human comprehension" != "no relation to human values". If you don't like God's answer to the Trolley Problem, uh, settle down. Maybe He understands something you don't. God plays 4-d chess.
2. Giving an answer to the Trolley Problem that you happen not to like is a far cry from being a Lovecraftian Horror.
3. Even if God really IS a Lovecraftian Horror, what are you going to do about it anyway? Shame it? Omnipotent is Omnipotent. Call it God or call it Azathoth, you'd better get to worshipping either way.
You didn't say "maybe God is more moral than us" or "maybe we'd understand the morality of God's decisions if we had all the facts," you said morality *doesn't even apply* to God. You specifically ruled out both utilitarianism (God is good because his decisions lead to the best outcomes) and deontology (God is good because he follows moral principles). If neither of those is true, that leaves very little connection between God's values and humanity's.
(I'm not even sure what you mean by "a different answer to the trolley problem than you" because utilitarians and deontologists give exactly opposite answers to it.)
Your stance strikes me as a complete abdication of responsibility - morality is simply what God commands, no matter what horrible things he commands you to do. And sure, maybe that decision helps you avoid getting eaten by Azathoth, but I wouldn't call it "morality."
> If neither of those is true, that leaves very little connection between God's values and humanity's.
The third option you're leaving out is "God defines morality." It's then logically inconsistent to accuse God of being immoral. If He appears immoral to you, that's only because your understanding of morality is flawed.
>I'm not even sure what you mean by "a different answer to the trolley problem than you"
Less euphemistically that meant "action God could take that you would presume to criticize on moral grounds."
>morality is simply what God commands, no matter what horrible things he commands you to do
That's correct. If you presume the existence of God (which I don't, FWIW) then I don't see how one can arrive at any other conclusion. The Christian tradition certainly regards God as defining morality. If you admit the concept of Heaven as representing infinite utility, then you can pretty easily derive morality as being whatever it takes to get there, i.e. following God's will. And if God really turned out to be Azathoth then the only meaningful option available to us would be trying to not get eaten. In that case I suspect you _would_ call it morality.
Christians would hold that God's values are human values: or rather that human values are God's. That's kind of what the whole "made in His image" thing means.
Christians might say that, but Wanda was specifically saying the opposite - that God's values are not the same as human morality and we should not expect them to be.
I didn't say that at all. My point was that trying to apply human morality to God is a category error, like asking for the temperature of a single particle (which, if you're not a physics person, isn't a well-formed concept: temperature is only defined over ensembles).
I really need to understand why moving the substrate of intelligence from humans to machines gets rid of these problems. I know a dumb reason why people would assume it would but I do not understand the strong case.
I think Eliezer's point isn't that machines will naturally not have the same problems, but that they potentially could. He specifically writes that "in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree." I'd put the emphasis in that hypothetical more on the "lacking any inbuilt tendency to be corrupted by power" and less on the "AI".
The point being, human minds are "corrupted hardware" and only intelligences lacking corruption could be trusted to do the utilitarian calculation accurately. We humans do better by following deontological style rules that have been developed over trial and error for long periods of time.
If the one person was an AI and assented, I guess you'd be on more solid ground than usual with the trolley problem.
True! In Eliezer's hypothetical these AIs all naturally come to the same moral conclusions since they are non-corrupted intelligences: if such a scenario existed, you could assume any individual AI would consent to dying since all would agree on what the right action is.
How do you judge "worthy of personhood"? You can't judge it on "any entity willing to commit murder is not worthy" because your perfect AI is willing to kill.
I wouldn't think he means "worthy" in a moral sense, but in a technical one: if it's worth calling it a person.
I agree: it's probably best to take his hypothetical and strip the "A" off of "AI" altogether: in his view any non-corrupt Intelligence could be trusted to come to the correct utilitarian conclusion.
Here’s where I’m struggling:
If the AI is based on us, then it would inherit our prejudices. I don’t think there’s a single golden variable to be maximized that we can just code into it, other than “create a narratively satisfying world that feels authentic,” and that seems so nebulous that I don’t know if humans have any possible solution we could all accept.
If it’s not based on us… it has to evolve from external pressures but so did we. We always talk about ourselves like we are artificial but the whole of history had a hand in shaping us and I don’t see how that would produce a perfect artificial intelligence anymore than it could make a perfect biological one.
I do think superintelligence is likely and dangerous but I don’t think it can produce stuff that is free of trade offs.
That's the real nub of the question: being corrupt intelligences ourselves, how could we ever tell if another intelligence is non-corrupt? Of course I imagine Eliezer agrees with the concern, since he's the biggest proponent of the AI alignment problem out there. He definitely thinks that, at least right now, we can't produce a "Good" artificial intelligence.
Slowing things down to the point where you can understand them is the only meaningful answer I can come up with for that problem.
I think the human propensity to rationalize bad behavior, to slip into seeing outgroups as less than human, etc. is a feature, not a bug. These things are just the downside of things that allow us to function as well as we do: rationalizing bad behavior is the downside of self-esteem management, without which we would sink into depression. Seeing the outgroup as non-human and dispensable is the downside of chunking, which allows us to process complex situations rapidly. An AI modeled on us will have the same features.
I think any AI that has ability to do stuff we would care about is going to have those same trade offs whether based on us or not. This is why I don’t understand the strong case.
About 20 seconds into the "crypto" pitch, it becomes obvious from the very language used that it's a scam and a Ponzi scheme. That's when you check to make sure you still have your wallet, and keep walking.
If you choose to participate, the only question is, Will you be one of the few to benefit from the scheme, or will you be among the majority who lose?
Cash works. I always pay tips in cash, so unscrupulous business owners can't claim the server's gratuity as part of her or his lowly hourly wage (Sorry, Ronald Reagan).
I don't have a lot of sympathy for people who fail to use common sense.
FTX is not crypto.
What is it?
It's a crypto exchange.
> If you think you’re better at it than all the VCs, billionaires, and traders who trusted FTX - and better than all the competitors and hostile media outlets who tried to attack FTX on unrelated things while missing the actual disaster lurking below the surface - then please start a company, make $10 billion, and donate it to the victims of the last group of EAs who thought they were better at finance than everyone else in the world. Otherwise, please chill.
Is Scott serious about this? This is like "you're not President! How dare you criticize the President?" Or like those Mormons on Usenet who told me that it doesn't matter that Mormons hide their secret ceremonies because if I wanted to know about them I could always spend a couple of years being a Mormon. (Which incidentally would also mean I could get punished for criticizing them, which defeats the whole purpose of wanting to know about them.)
"You must become a billionaire yourself or you have no right to criticize a billionaire" is an awful, awful, take and as a poisoning the well fallacy is symptomatic of the problems that got you guys into this mess in the first place. (And even when X is easier to do than becoming a billionaire, "you must do X or you don't get to criticize X" is an awful take. I probably could become a Mormon, but I shouldn't have to in order to say there's something I don't like about Mormonism.)
I think I'm trying to say something more like - suppose you get some new vaccine endorsed by the CDC and WHO and your doctor, and also have your kids get the vaccine. Then later it turns out the vaccine caused cancer. Are you a moral monster for encouraging your kids to get it, or for getting it yourself and cutting off your own potential and ability to fulfill obligations? I think a common-sense answer is no - you did the reasonable thing by trusting the CDC and WHO and everyone who was more expert than you, and it's not your job to be an expert in medicine.
In the same way, I think charity recipients trusted that something endorsed by Blackrock and Temasek and Sequoia and the SEC was probably safe. Those are the experts, and so if you know that they approve of something, you don't have to do your own research.
If the charity recipient does think they're better than Blackrock and Temasek and so on, then they're probably among the top finance people in the world, and should put up or shut up.
But there is no expertise anywhere in the financial sector. It’s easy to make money in good times. Then the bad times come, they don’t predict it, they go bust or the government bails them out. Rinse. Repeat.
There is expertise, though. The average layperson couldn't do a JP Morgan market maker's job without either losing money or trading so little that they quickly get fired.
In general, people in finance will often be experts in "doing finance, conditional on black swan events not occurring".
Do you think you'll update your priors that non-experts who avoided this mistake may have some traits worth understanding and copying? Specifically, that expert endorsement is not a substitution for understanding what you're getting into.
I can't help but notice that there was also a group of people who recently refused to get a vaccine that was endorsed by the CDC and WHO, and for healthy children it does appear that turning down the vaccine is/was the correct decision.
Which non-experts?
If you mean people who hate all crypto, I think I've probably made more money in crypto investing overall (even counting these losses) than they've made in whatever they're doing instead. You can always avoid all false positives by declaring everything a negative, but that's not very productive.
If there were people who were able to predict that eg Binance was an amazing investment but FTX was a bad one, then yes, I would love to hear what these people have to say and update on it.
I'm afraid I'm very late to this discussion but since this comment, as well as Mr. Dolittle's reply, connect with what I feel is an obvious point, I have decided to write anyway.
The point is not whether Binance would have been a better investment than FTX. The point is that everyone who makes billions in cryptocurrencies makes it not by providing useful goods for society but by taking other people's money. I know that that is the name of the game, and that's the reason I'm not participating myself.
For me, the whole concept of taking other people's money and using it for charity seems morally wrong, and I have no idea why anyone could believe that SBF was doing a good thing, nor why they would accept donations from FTX.
I'll try to avoid making this a "ponzi scheme" point, but the reality must point somewhere in that direction. Because crypto currencies do not have any intrinsic value, they aren't like buying a stock as an investment and making money as we would normally think about it. If you bought Apple stock 20 years ago, you were investing in a company that used your funds to build and increase something of real value. The value of that stock, and therefore your investment, goes up together. Everyone who bought Apple stock went up together, and someone buying your Apple stock from you will have something of value as well. Even if the company went completely bankrupt, they have offices and tangible goods which can be sold off.
Bitcoin, or any crypto currency, is only valuable because other people are willing to buy it. Every single dollar a person would make selling crypto must necessarily come from someone else. So if you make a million dollars, one or more other people lost a million dollars. Now, if Bitcoin stays stable long term, that point gets obfuscated. Many people can buy and sell and not feel like they made a poor decision. But, there are still two problems. One, they can't keep making money forever. Early investors and people day trading can probably make some money, but overall equilibrium means that the average is essentially null - and those day traders are pairing off with others who are losing money. Two, if Bitcoin hits a real problem then unlike Apple or another tangible company, everyone who bought before will lose everything.
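The zero-sum accounting claimed above can be sketched as a toy model (a deliberately simplified illustration, not a claim about any real exchange; all names and prices are made up). Only cash flows are tracked, which is the commenter's point: the remaining coins are "worth" only whatever the next buyer pays.

```python
# Toy zero-sum market: each trade just moves dollars between two parties.
# Coin holdings are deliberately not valued; only realized cash changes hands.

def settle(trades):
    """Each trade is (buyer, seller, price): buyer pays `price` for one coin."""
    cash = {}
    for buyer, seller, price in trades:
        cash[buyer] = cash.get(buyer, 0) - price   # buyer pays out
        cash[seller] = cash.get(seller, 0) + price  # seller receives
    return cash

# Alice buys a coin from Bob at $100, later sells it to Carol at $150.
trades = [("alice", "bob", 100), ("carol", "alice", 150)]
balances = settle(trades)

print(balances)                # alice nets +50, bob +100, carol -150
print(sum(balances.values()))  # 0: every dollar gained was lost by someone
```

Carol's -150 is only recouped if a later buyer appears at a higher price, which is the "equilibrium means the average is null" point in the paragraph above.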
I don't hate crypto. I think there's a possibility for using them for purchasing goods and services that can be useful. They are not an investment. They should not be seen as an investment. They are modern day tulips.
Sorry, I forgot to refer to the experts more clearly. There are plenty of experts, the vast majority in fact, who are wary of crypto in general and would always steer people away from it.
Once you've made the jump to "crypto is okay, who should we choose?" you're going to get a lot less consensus. Without an underlying value, you need to develop some kind of trust-review process. Crypto is intentionally opaque, so it's just generally hard to do. Bitcoin has been around the longest, so I'd say it has the most built up trust, but its history (despite being normal and stable compared to the overall crypto environment) is one of hacks, theft, and deliberate price manipulation.
You can say that people just hate on all crypto, but I feel like the onus is on the pro-crypto side to demonstrate why the vast majority of financial experts, who are on the "risky, best not try this" side of the equation, are in fact wrong.
The fact that you came out ahead participating in a Ponzi scheme is supposed to vindicate you and humble the critics who recognized it as such?
Please don't reopen the "is crypto a Ponzi scheme" debate here :(
That is indeed a reasonable point about pointing the finger of blame at "They should have known better, any child in the street could have seen the emperor had no clothes!"
On the other hand, it's also true that "You don't have to be a hen to criticise a bad egg". I don't blame people for trusting that Big Rich Enterprise is on the up-and-up, but there does come a point when you have to apply the same standards to "well I know and like these guys/they have a lot of the same beliefs and principles I do" when it comes to money as you do to "maybe First Third National Local Bapresbydist Church are a great bunch of people but Givewell says don't give money to their weekly collection for the mission in Botswana, give it to mosquito nets".
I think there's a relevant difference with the vaccine case, which is that by now humanity has gone through many more trial-and-error iterations of vaccine creation than of crypto industry wipeouts. If the medical profession was crap at making safe vaccines, we would have noticed and stopped taking vaccines. But if the finance profession is crap at figuring out whether the entire crypto industry will collapse within the first 20 years, how would we know?
I think the situation is more like - it's several hundred years ago and the consensus among smart people is that if you follow the bible you go to heaven when you die, should you follow the bible?
> I think a common-sense answer is no
But "common sense" is a fallacy (appeal to common sense), moral agents are supposed to know better. In exactly the same way as "I was just following orders" is not an excuse.
You do know some people doubted the CDC/WHO, and you do know some people questioned their orders in critical historical periods, just like some people questioned FTX claims.
"Hindsight is 20/20" is not an excuse, especially when some people did have the foresight that you for some reason did not.
I think you're eliding an important distinction between "outsourcing your knowledge of what is real" and "outsourcing your knowledge of what is moral".
It seems obvious to me that the former is less of a moral failure than the latter, partially because society's collective factual knowledge is far in excess of what a single person can hope to learn individually (while the same is not really true of morality; put three ethics professors in a room and you won't get better essays on ethics than any of them could produce alone), and partially because the failure states of the former tend to look very slightly less terrible than the failure states of the latter.
Failing to recognize what is real is not by itself a moral failure. But choosing to **act** on a wrong notion is.
For example mistaking thinking that a new vaccine will stop a pandemic is not by itself a moral failure. But using that notion to then violate deontological principles and suspend the constitutional rights of people who refuse to get that vaccine is **definitely** a moral failure.
Deontological principles exist precisely to avoid these kinds of moral failures. You should never blindly trust authority, especially when it's asking you to violate deontological principles. The end **never** justifies the means.
A moral agent is supposed to know that.
I mean, in a deontological system the "wrong" part of "choosing to act on a wrong notion" is mostly irrelevant. If you think suspending rights is never okay *even if* the vaccine would stop the pandemic, well, whether it would or wouldn't doesn't affect the correct decision.
More generally, while within consequentialism situational facts are more important, most of the *really*-bad outcomes need either a dodgy-to-begin-with morality or a reasonably-elaborate scenario; this is why I said the failure states of outsourcing factual knowledge are "very slightly less terrible" than those of outsourcing morality.
I also think that if you insist on absolute deontology *in emergencies* as a prerequisite of goodness, your set of Good People is going to be vanishingly small; I lie less often than ~anyone I know, but I *will* lie if I believe that telling the truth will get someone murdered. There are, shall we say, few deontologists in foxholes. This is why emergency powers exist in ~every country, despite their potential for abuse.
> If you think suspending rights is never okay *even if* the vaccine would stop the pandemic, well, whether it would or wouldn't doesn't affect the correct decision.
It should not affect the decision, but in reality it does, because people don't actually have principles.
If you are willing to abandon deontological principles in emergencies, then you have no deontological principles.
It's a rule that is not supposed to be broken, you break it, and it turns out it was the wrong decision, but it's OK because no one knew better (hindsight is 20/20), except some people did make the right decision, but maybe they made the right decision for the wrong reasons, except the reason was that they didn't break the rule that should not have been broken in the first place.
At what point do you accept that you are rationalizing your moral failure?
It's difficult but unavoidable that in these situations you have to be able to look back, with the benefit of hindsight, and ask "should I have known differently?" and sometimes the answer is no, even when you were wrong. For a clear example, imagine calling an all-in when you have a straight flush, and losing to a royal flush. You didn't do anything wrong, and learning any kind of lesson there would be a serious mistake.
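The poker example is just expected value: a decision can be correct ex ante and still lose. A hedged toy calculation (the probability and stakes here are made-up numbers, not real poker odds) makes the point concrete:

```python
# Toy EV check for calling an all-in with a straight flush.
# All numbers are illustrative, not actual hand odds.

def expected_value(p_win, pot, call):
    """EV of calling: win the pot with probability p_win, lose the call otherwise."""
    return p_win * pot - (1 - p_win) * call

# Suppose only a royal flush beats you, with (made-up) probability 0.001.
ev = expected_value(p_win=0.999, pot=1000, call=500)
print(round(ev, 2))  # 998.5: hugely positive, so calling is right even if you lose
```

Losing the hand doesn't change the sign of that number, which is why "learning a lesson" from this particular loss would be a mistake.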
So when you say "some people did have the foresight that you did not" I'm very skeptical. Did they? Just because they were right doesn't mean that they did. And what Scott wrote is a heuristic argument that they didn't.
I haven't looked much into SBF/FTX or their critics, so I don't know. Who do you think had this foresight, and on what basis? How are you determining that they were right for good reasons, and not just lucky?
I think these questions are very difficult, but asking them is the only way to improve.
But are you really sure you didn't do anything wrong? Maybe the other guy cheated and your mistake was playing such kind of games. Immediately dismissing your mistakes is one of the worst things you could do.
At least think about it.
And yes, I know some people did have the foresight because I was one of those people. It's really astonishing how **today** I can make a prediction, a "rationalist" would dismiss the prediction, and then when the prediction comes true claim "nobody knew this could happen, hindsight is 20/20".
Logic would dictate that if you got the prediction wrong you should adjust your priors and maybe listen to the people who got it right, not use "hindsight is 20/20" as an excuse.
Sure, I agree it's possible, and maybe worth looking at. I just mean it as an illustration of the kind of situation where you can get a bad result but made the right choices with what you knew. It's even possible that the other guy *was* cheating, but you had no way of knowing, and poker is still a game you can make a lot of money at (with high EV in the future), so you still did nothing wrong. Won't always be true, just saying it's a thing that happens, and that you have to be ready for it when dealing with risk.
And okay, so the answer to "who" is "you". What about the basis for this prediction? What reasoning led you to think what particular things?
My whole thing here is just that someone having been right is not that strong of evidence for their process having been good (lots of people bet on both sides for lots of reasons, the results data is very sparse), so while I *do* think in the face of having been wrong you should think things over, I think you have to be comfy concluding "actually, I didn't fuck up at all." Which is something I think some people really struggle with, because it gives no feeling of resolution. (And, of course, which some other people do *way* too much).
But it means if you're just saying you were someone who got it right, but not saying why the process by which you did so was good, my response is mostly "so what?" Having been right is less than table stakes to me here.
Scott was saying a bit more than "sometimes you have to be comfy in concluding I didn't fuck up". He said that since the people who said it would die horribly were not billionaires, they don't count as people who knew it would die horribly.
Lots of people fail to be billionaires for reasons completely unrelated to their ability to detect bad ideas.
Also, Scott claimed:
>Listen: there’s a word for the activity of figuring out which financial entities are better or worse at their business than everyone else thinks, maximizing your exposure to the good ones, and minimizing your exposure to the bad ones. That word is “finance”.
This is illogical. It may be literally finance, but it's not a central example of finance. Not every part of finance is equally difficult, so someone could be able to figure out that some things have a high chance of fraud without being good at finance in general.
If you are going to dismiss all the instances in which you were wrong, and dismiss all the people who were right given the same information that you had at the time, why bother calling yourself "rationalist"?
You are just believing whatever makes you feel better about yourself and finding rationalizations to do so.
At what point do results matter?
I feel like this ties in to the "modest epistemology" debate.
Maybe all the finance organisations are consistently wrong in certain ways because they have similar internal cultures that lead them to consistently make the same mistakes. Nobody is capitalising on this obvious failure because all the people with money and the capacity to invest it are embedded in the same culture. They respect the financial experts and they all assume that if there were opportunities for profit somebody else would have taken them already.
It occurred to me earlier this year that the value of Bitcoin was probably going to go down, and there was probably some way for me to make money off this fact. But I don't really know how to do that and didn't care enough to follow it up. Given the number of Bitcoin skeptics in the world, it's possibly weird that more people didn't see this coming and find a way to profit? My suspicion is that the kind of people who don't trust Bitcoin are not the kind of people who have investment strategies, and the kind of people who have investment strategies are the kind of people who trust Bitcoin.
I don't think Bitcoin value will evaporate to nothing, unless governments around the world really crack down on it. But gold seems to have everything going for it that Bitcoin has at the moment (except for being physical, and that's a two-way street).
My general assessment is (and was before the FTX collapse):
Bitcoin? a bit dodgy
Bitcoin derivatives? run for the hills
That analogy would be more persuasive if the "experts" in the real case were more akin to the staid and cautious CDC and WHO and doctor experts in your analogy.
But VC firms are hardly that. They are in the business of high risk high return investments. They roll dice all the time, and invest in stuff that has a 98% chance of failure and 2% chance of glory routinely. The fact that they invest in something is not a great endorsement of it being a "safe" investment, indeed I would think the more common sense interpretation would be that it is not safe at all -- that it is one of those risky bets that only people who love playing cards in Vegas for big stakes would happily place. If you wanted a safe bet, you'd look at where the University of California invests its pension fund, not a VC firm.
I think it's disingenuous to focus on "VC firms" when Blackrock, Temasek, and the SEC were equally involved.
Why do you keep implying that the SEC endorsed FTX? Did they ever go "yep, we audited FTX and it's all legit?" The best that could be said is that they hadn't prosecuted him yet.
Yeah, I was going to make that point in my original longer post, so thank you for doing so. At best the SEC is like a police detective, referring for prosecution crimes that are brought to its attention. Equating "hasn't yet been fined or referred for prosecution by the SEC" to "financially on the up-and-up" is like assuming that because your neighbor hasn't yet been arrested for theft it's OK to invest in his multilevel marketing scheme.
Well, the SEC is another level of entanglement. Apparently (or going by all the Twitter action around this) Bankman-Fried's father knew an official in the SEC who then took the advice of Bankman-Fried himself on regulations that should apply:
https://cryptonews.com/news/was-sec-chair-gary-gensler-helping-sam-bankman-fried-find-legal-loopholes-for-ftx-heres-what-you-need-to-know.htm
This is part of the whole mess, the people involved were connected via their families to a lot of the infrastructure, for lack of a better word, so they maybe got favourable treatment/didn't have to jump through the same hoops as people without those connections. Or simply that they thought they knew better because they'd grown up in families with all this involvement, so they knew what could go wrong and how to get around things and make money the easy way.
I agree with L50L on this one. You would expect the SEC to consult with leaders of the industry on which it is pondering regulation, to do otherwise would be more than unusually ignorant on the part of government, to ignore experience and expertise they need. Likewise the FDA consults with physicians and drug companies.
Whether that consulting ends up corrupt is another story, but unfortunately one that is much more difficult to extract, which is why news media on deadlines usually run with Caesar's wife stories instead.
Why *wouldn't* the SEC consult leaders in the crypto space when developing regulations? This seems like a good thing!
And this still doesn't support Scott's implication that the SEC ever endorsed the state of FTX's finances.
I had written a longer post including the reasons why analogizing the others to a conservative guardian certifying agency was equally dubious, but decided to condense it and settle for impeaching just one of them in the interests of cogency.
And I'm not really following the argument that they have to be impeached all at once or not at all. You made no distinction among them yourself, so the impression I got from the analogy was that you thought they were all equivalent to the CDC. If I establish that even one of them really wasn't, then I think that calls into question the aptness of your analogy ipso facto.
I'm not saying they're exactly equivalent to the CDC. I'm saying something like -
All of us make a bunch of assumptions going about our daily lives. When I write this Substack, I assume Substack Inc will pay me the money I'm owed instead of fudging my subscriber numbers to keep most of it for themselves. When you post on here, you're assuming I won't seed my posts with links to malware that will install on your computer and steal your credit card details. If you're using Windows, you're assuming *Microsoft* isn't stealing your credit card details. We all make these assumptions because it would be impossible to go through life otherwise - if I have to personally read every line of Windows code before trusting Microsoft, I would never have time for anything else.
We do this by vague common-sensical trust networks. I trust Microsoft because the laptop companies trust it enough to include on their laptops, the tech magazines trust it enough to not have NEVER USE MICROSOFT in big letters on every issue, and in general Microsoft is so big and widely used that if they were doing insane evil things I would have heard about it. If Microsoft does turn out to be secretly stealing my credit card details, I think it's fair to say I am a real victim, rather than that I'm partially culpable for not doing all the relevant research and poring-over-code myself.
In the same way, the fact that FTX was very big and all the big companies invested in it seemed like a common sense trust network such that I could deal with them without having to learn financial analysis myself and go through their books or whatever. Common sense trust networks aren't exactly like official FDA approval of a vaccine or something, but I do think they're like eg my doctor agreeing a vaccine is good, or the CDC saying vaguely positive things about a vaccine, and I think the same considerations apply.
Yes, I grasped the nature of the analogy the first time. I'm just pointing out the "trust network" agents in the real case aren't actually that similar to the agents in your analogy. VC firms *and* (so we don't get into this red herring again) several known to be adventurous investment firms grappling with shitty returns in the ordinary market (betting amounts they could easily afford to lose) plus the SEC do not add up to anything equivalent to "the watchdog CDC endorses use of this vaccine" or "80% of the world uses MS Windows as an OS and every online PC/laptop review magazine endorses products using it."
And I'm only making the point because the analogy seems part of a mildly defensive attempt to rationalize why trusting this particular trust network wasn't really a mistake. But certainly nobody here is hating on you for the mistake. Why feel defensive at all? If nothing else, you're clearly in good company. It would be much more interesting to analyze why you (and others) made the mistake in the first place. We all make mistakes like that, and your writing often shines most impressively when it tackles subjects of this nature: here's a mistake I made, and which people commonly make, and let's think deeply about why this happens.
"If you're using Windows, you're assuming *Microsoft* isn't stealing your credit card details."
I only assume that because I assume Microsoft would like to, but make such a mess of even simple updates to their software that they can't figure out how to get it to work.
See the roll-out of Windows 11 and their list of "if your PC/laptop doesn't have one of this set of processors, it isn't eligible for 11". It was transparently obvious they wanted to sell a ton of Surface laptops/tablets on the back of this (the links they kept sending me about 'buy a new machine to update!' were all for their Surfaces) and I'm not going to buy a new PC just to get Windows 11. Never mind that I dislike laptops and tablets and won't use one if I can possibly avoid it.
So ever since I've been using Windows, I updated to the latest version (yes, I suffered with Vista like everyone else) *except* for this one, and that's because I am not going to pay Microsoft more than I have to. 10 works fine and does what I want, why will I buy what I don't need?
(Ask me about their "Software as a Service" subscription model which we're signed up to at work, which I think is absurd for 'hey if you want to use Word, instead of a once-off purchase now you have to pay us for eternity'.)
All companies are evil and want to gouge money out of you. I don't trust *any* of them.
I think there's a big gap between this and what you actually said.
If someone told me that Microsoft was doing something nefarious, my reaction would *not* be "come back to me when you've started a company as big as Microsoft".
Hm, I get your point, but I think there is something missing. Suppose your kid is ill, and for whatever reason many in your community strongly believe in alternative medicine, which has healed many who are around. And so you take your kid to the best healers around, all those persons with the highest alternative-medicine credentials, only to discover in the end that they couldn't heal your child. From your point of view, those were the experts. From somebody else's point of view, they were not to be trusted with your kid's health, because the whole system they rely on is a fraud, or, for a better comparison, is at least imperfect with regard to certain illnesses. Maybe it does a good service with, let's say, preventing and reversing lifestyle illnesses early on, but it doesn't do a good service with healing cancer. In this example, the key is not to become a better alternative healer before being bitter about your mistake; it's to realize that this whole sub-sector never was the right place to seek the only or most relevant advice on your child's health, even though it gave your neighbours helpful advice on nutrition and everyday activities that resulted in them being much fitter.
The non-profit/charity criticism from those who would never have trusted FTX doesn't come from a place of 'I'm a better financial expert than those at Blackrock' or 'why did you trust Blackrock over (their closest competitor being slightly more sceptical of FTX)?'. At least in part it comes from a place of 'why did you think that specific sub-sector - which we know is prone to fall for hypes, which is into high risk and high profit, often sees big losses, and likes to play with money that doesn't represent concrete values - is the best place to rely on when thinking about funding for your charity?' I think in this case 'be a better billionaire or shut up' is misguided.
Just to be clear, my intention is not to argue against VC, billionaires or traders, I just disagree with 'as long as you can't do better, be silent'.
I guess your initial point was: don't beat yourself up too much over a wrong decision, and I actually agree with this one. But I think one needs to find a better reason for that, which maybe is just to say that everybody is making mistakes, that's human, and sometimes those mistakes can have costly consequences. In this case, many others made a similar mistake. Making a mistake, even a costly one, doesn't mean you're a moral monster.
I'd probably also argue to seriously reflect on this mistake and lessons learned, rather than saying 'ah, all the experts were wrong as well, so whatever, let's continue as always'.
I think in this case you didn't make a medical-knowledge failure, you made a meta-level failure in determining which supposed-experts to trust.
Obviously figuring out which experts to trust is hard, and sometimes "go with the most prestigious ones" goes badly, but I think it's the best heuristic most people who aren't experts themselves have, and people who follow it mostly can't be faulted. Our hypothetical alternative medicine mom is blameworthy insofar as she rejected the consensus prestigious experts for non-consensus non-prestigious experts.
I think Blackrock, the SEC, etc, *are* the most prestigious consensus experts in the financial world. If it was a failure to trust them instead of someone else, I still don't know who that someone else would be. I don't think there was some other expert affirmatively saying "FTX is bad!" (and I don't think people who hate all crypto on principle but didn't identify any specific issues with FTX should count)
I know (online acquaintance) somebody with a PhD in molecular genetics, a real genuine physical syndrome, and who believes in Reiki healing that has improved their condition and is now offering to do Reiki healing over the phone/internet for people.
Do I 'trust' "well they've got a PhD in a biological science, I don't even have a basic degree in anything, they are clearly the expert here"?
It could be worse. Remember "I'm not a doctor, but I play one on TV...now here's some medical advice...?" That stuff actually works. We're amazingly gullible as a species. It's probably one reason we are also so tribal -- we need *some* defense from the risks of our individual gullibility, and joining one of Scott's "trust networks" can help with that.
>I don't think people who hate all crypto on principle but didn't identify any specific issues with FTX should count
I don't see why - obviously, that's fair that they didn't say "SBF is a fraud but CZ is fine" and didn't predict that FTX was going to fall and Binance be left standing. But also, they didn't claim to predict which specific crypto businesses would collapse in what order - they just claim that all of them will and that all of the money invested in crypto will be a dead loss because the entire business is fundamentally a Ponzi scheme. Given how many crypto businesses have failed, I'm updating strongly to "David Gerard is right".
This also explains why they aren't making money - creating a synthetic put option against the entire sector would be hard, and the margin would be so big that (as Keynes put it) the market can easily stay irrational longer than you can stay solvent. Moreover, the counterparties of that transaction are crypto investors; it's very hard to see how they would have the ability to pay out if you are right, since they are likely to go broke. What you would need is something like a credit default swap backed by a highly-diversified financial institution that has some exposure to crypto (if it has zero exposure, then it wouldn't be issuing those CDSes) but not so much that it would be at default risk if the sector went belly-up. But Goldman Sachs aren't issuing crypto CDSes.
[To simplify the above: I can't bet that "you're going to go broke" because if I win the bet, you can't pay out, so I have to bet with someone else that "he's going to go broke" and I have to be sure that you going broke won't bring them down too; the people I trust not to go broke if crypto crashes are not taking that bet]
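The counterparty-risk point can be put in toy numbers (mine, purely illustrative): a short bet is only worth something to the extent the counterparty can pay when you win, and betting against crypto with a crypto-exposed counterparty means you win exactly when they can't pay.

```python
# Toy counterparty-risk sketch (illustrative numbers, not from the thread).
# A short bet pays out only if the crash happens AND the counterparty
# can still settle afterwards.

def expected_payout(p_crash: float, payout: float, p_counterparty_pays: float) -> float:
    """EV of a bet that wins in a crash, discounted by settlement risk."""
    return p_crash * payout * p_counterparty_pays

# Betting against crypto *with* a crypto-exposed counterparty:
# the very event you win on is the one that bankrupts them.
direct = expected_payout(p_crash=0.5, payout=100.0, p_counterparty_pays=0.05)

# Betting with a diversified institution that survives the crash:
hedged = expected_payout(p_crash=0.5, payout=100.0, p_counterparty_pays=0.95)

print(direct, hedged)  # 2.5 47.5 -- same bet, very different value
```

Which is why the bet you'd actually want has to come from someone like Goldman Sachs, and they weren't offering it.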
It seems to me that there are two positions to update in favour of:
First, crypto is fundamentally a problem, anyone involved in it is not to be trusted. This is comparable to accepting charity funding from other sectors that are unethical, like casinos or tobacco or coal.
Second, any business that is making an especially big deal about its ethics and charity giving is more likely to be unethical/corrupt in its business practices. This is the old "mobsters give to the local church/school" idea writ large.
There's a much better and simpler heuristic: don't trust anyone (unless you have a good reason to (and prestige isn't a good reason)).
And yes, trusting somebody you didn't have a good reason to trust is your fault.
I think this is a reasonable heuristic in the big picture, but that it is being misapplied in this particular case. Big financial firms have a different purpose and live in a different ecosystem than big medical firms. The environments can't be compared to each other. Big financial firms are legendarily well known for not caring whether their counterparties are trustworthy, or honest, or are going to survive beyond the very short term. Their collaboration says exactly and only "this is going to make money for me". Any other consequences and participants can go sex act themselves. You only need to have read one Michael Lewis book to understand this. Big medical firms have their well being and survivability tied to solving problems for the population at large and big financial firms absolutely do not. Blackrock making money and cooperating with something is no basis for assuming that anyone else will make money, or not be scammed, or that their partners are not criminals. No one in that industry cares about any of that. That's where most of the margin comes from. The mafia is an expert in smuggling but they are not a trustworthy source of information about the topic. The FSB is an expert in the Russian security state but they are not trustworthy sources of information about it. Etc. If you've missed the existence of this entire category, that's not good.
Also, it's really aggravating that you keep bringing up the SEC - you have no basis for doing so. The link you cite makes the following argument: "The offshore crypto exchange to which US law does not apply was not prevented by the SEC via unicorn powers from committing fraud. With that as evidence, plus the evidence of anonymous people and from extremely trustworthy Republican politicians making accusations that they totally always have a good basis for, definitely maybe SEC was in cahoots with FTX, after all they have held meetings, or maybe not but definitely a "bad look".
It's an offensively poor and non-empirical argument, suggesting that you don't know much about the processes and practices of the SEC, the legal, regulatory, or habitual constraints under which they operate, or even the basics of what they have or have not done concerning FTX or the crypto industry, and even worse, you don't even know what you don't know and that you don't know it. But claims that "the SEC endorsed the bad guys" .. are a very convenient thing for a lot of people to claim/believe right now for obvious reasons.
> I think in this case you didn't make a medical-knowledge failure, you made a meta-level failure in determining which supposed-experts to trust.
Yes, that's a fair summary of my point.
> I don't think there was some other expert affirmatively saying "FTX is bad!"
I'm not expert enough to give you a lot of detail here. But if I was to go to any financial expert close by and say 'I need money for altruistic cause x, and I want something that gives me more money rather than less money, but I also want something that is low risk, because (all kinds of reasons)', I kind of doubt FTX would have been the advice of the day.
I don't think anyone was arguing from first principles that FTX was the best company to get money from. I think FTX was offering people money, and you could either say yes and have FTX money, or say no and have zero money. I think the bar for accepting an offer like this is pretty low - basically just "not so fraudulent that it would be offensive to their victims to accept" - and that it was very fair to be surprised when in the end FTX failed to clear that bar.
I agree with most of that except the last sentence. It should have been very clear to anyone that FTX had a decent chance of failure. Especially after June.
You can say you don't like Mormonism, but if you say after a Mormon sex scandal "how could anyone have missed the red flags? It was so obvious!", then people have the right to not believe you. After all, if they were so obvious, why didn't any of the many critics of Mormonism see them? Why didn't the police see them?
This seems like a pretty transparent misreading of the article?
This section wasn't about not blaming FTX, it was about not blaming non-expert individuals who didn't notice the problem with FTX sooner.
> "This is just rule utilitarianism"
Actually, that's not quite right. It's multi-level act consequentialism. The difference is explained here:
https://www.utilitarianism.net/types-of-utilitarianism#multi-level-utilitarianism-versus-single-level-utilitarianism
My latest post gives more of a breakdown of the different alternatives to *naive* (single-level) act consequentialism:
https://rychappell.substack.com/p/naive-vs-prudent-utilitarianism
Your Mistakes disclaimer, the opening statement: "I don't promise never to make mistakes." Grammar and sentence structure: you never promise to make mistakes. Correction of the double negative should read: "I don't promise to never make mistakes." Trivial? Yes, but it totally changes the meaning.
That is the joke
This is cutting against the grain of this thread, but does anyone here still take COVID seriously, or are all of you basically over the pandemic? My brother and his girlfriend still mask up when going to certain places, and they also got the latest booster, and my corner drugstore and my parent's cafe still requires masking, but otherwise, I rarely see people masked up. Am I right in assuming it's pointless to care about COVID still?
I have resumed wearing a mask on public transport for the winter season. Not really because I am concerned about Covid specifically, but rather because I think it is a good general hygiene standard.
And when I get cold symptoms I test for Covid. I plan to isolate a little more strictly with Covid than with other colds, but not much more. (Isolating is no longer mandatory in my country even with Covid.)
I'm a "vax-and-forget"-er all the way, my man. Used a N95 on one occasion when meeting with some elderly relatives last month.
I pretty much stopped caring about COVID back in February, but I wore an n95 when flying at my parent's insistence and I may decide to get the latest booster just in case (mainly to protect relatives when traveling for the holidays, rather than myself).
I have a risk factor that makes my risk approx equivalent to that of a man in his 70's. I go where I like, but mask in indoor public places, get the boosters, and run a big air purifier in my office. The degree of risk of my getting very sick with a respiratory illness is going to wax and wane as flu, covid and RSV cases do, but it's not low enough to shrug off. If I were 35 with no risk factors I think it would be. The other concern I have is Long Covid. I'm sure a lot of things called Long Covid are just slower-than-average recoveries, symptoms that in fact have nothing to do with covid, neurosis, malingering etc. But I'm pretty positive that does not account for all the Long Covid cases. I had what I'm pretty sure was a post viral syndrome myself 20 years ago and it absolutely ruined 3 years of my life. I never want to go through that again. Just for the record, I am not indignant that most other people are not masking.
Mostly my behavior has returned to pre-pandemic norms, and my comfort level is squarely there.
The only exceptions to that would be one-offs like if I was exposed, I'd stay in and wear a mask to go out until I could reliably test negative, or if I have a friend who is uncomfortable for some reason I'd mask, hang out outside, or whatever.
I got vaxxed, and got a booster last December. Since things went back to normal in Ireland about 6 months ago, I have too. Pharmacies used to ask you to mask, but now even they don't. I was drinking at the bar in my local pub tonight, and have been frequently.
I'm not convinced by the booster thing at this point - I think if you are working in public health or something maybe it is useful. But it doesn't prevent infection, and other than reducing the chances of infection for a couple of months, all it does is give your long-term immunity a prod, IMO. Encountering the virus in the wild probably does that just as well. I don't think it would hurt to get another booster this Christmas, and if my family fret about it I probably will. But I am basically treating Covid as just another endemic virus.
As far as I know I haven't got it, but I quite likely have had it asymptomatically. Then again my sister had it a few months ago for the first time (her son who lives with her got it previously but she wasn't infected). She said it was nasty, but she was over it within a week.
I mask if I'm sick (and probably contagious), because that is one of the few cultural changes I'm hoping lasts after the pandemic. Otherwise I don't worry about COVID beyond what I have to with regards to my family's jobs (my mother works in healthcare, so I get tested before she comes to visit if I might be sick)
I'm taking it seriously in the sense of getting new boosters when they come out, but at this point I think that's about all it is reasonable to do if you're not in a very at-risk population.
I'm still avoiding gatherings as much as practical, wearing an n95 when it's not, not eating indoors, etc. I expect to do that until:
a) I'm convinced that long Covid odds are well under 1% per infection, ideally under 0.01%. (Currently the studies, which are all flawed in various ways, more research needed, etc. etc., seem to point to closer to 1 in 6, with 1 in 20 about as low as it gets. While vaccination helps, it seems to be more on the order of a 40-50% reduction -- maybe less, maybe a bit more, not vastly more -- than my preferred .999...)
I'll be very glad to see a persuasive study that shows it to be much lower, and due to the low quality of the existing data that's not ruled out. But thus far I'm still waiting.
Advances in prevention and treatment would also work, of course. But we're no longer doing Warp Speed-type programs and even funding for existing research has been largely blocked by budget struggles. (The administration is going to try again in the lame duck, but I imagine that it will fail again.)
So I expect that to be back to the usual timeline for new drugs, i.e. not getting to approval any time soon, even if there's something to approve. (And I've read Derek Lowe for enough years to know how many candidates there are for each drug that works.)
b) prevalence is low enough that it's reasonable not to expect to be infected once or twice a year in the absence of precautions. I don't go out of my way this much to avoid flu because 1) it doesn't seem to have anywhere near the same rate of sequelae, and 2) flu prevalence times infectiousness meant I could go 5-10 years without getting the flu. That's clearly not currently the case with Covid.
c) the expected seriousness of long Covid is assessed to be lower than it currently looks. (I'm concerned both about life-changing but not immediately deadly things like long term fatigue or permanent anosmia, and actual life-threatening issues like greatly increased cardiovascular risks in the years following.)
d) social/economic pressure makes that unsustainable.
(Or, I guess, e) I actually decide to stop worrying and love repeated SARS-Cov-2 infections, but that seems less likely.)
Even if I go "back to normal" in some sense, I still don't expect I'll ever, e.g., fly without a mask again. I routinely got colds and worse from flying and I'd be just as happy not to go back to that even if Covid risk drops below my threshold.
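The arithmetic behind condition (a) above can be made concrete. A small sketch, using the study ranges quoted there (1 in 6 to 1 in 20 per infection, a hedged 40-50% vaccine reduction); the exact figures are illustrative, not my endorsement of any particular study:

```python
# Per-infection long-Covid risk under different study estimates, after
# applying a relative risk reduction from vaccination. Numbers mirror
# the ranges quoted in the comment above; purely illustrative.

def residual_risk(base_rate: float, relative_reduction: float) -> float:
    """Risk per infection after a relative risk reduction."""
    return base_rate * (1.0 - relative_reduction)

high = residual_risk(1 / 6, 0.45)   # pessimistic estimate: ~9.2% per infection
low = residual_risk(1 / 20, 0.45)   # optimistic estimate: ~2.75% per infection
threshold = 0.0001                  # the "well under 0.01%" comfort level

print(high > threshold, low > threshold)  # True True -- both far above it
```

Even the most optimistic quoted estimate sits orders of magnitude above the stated comfort threshold, which is why the conclusion turns on the studies being wrong rather than on the arithmetic.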
I got the bivalent booster some weeks back. I am considering masking up in the most crowded environments I frequent, namely the subway and the supermarket.
n95 masks help substantially (not as much as I'd like, significantly more than zero), and I don't find them all that uncomfortable. YMMV.
For those looking for comfortable ones, I like Kimberly-Clark's duck masks. 3M's Aura models also get good ratings for both comfort and filtering capacity.
Don't order from Amazon. They're full of counterfeits. (Or were last time I was buying from them.) This seems to be a general problem with filtration products for them-- I stopped buying water filters there a few years back after I noticed that I was getting fakes (that weighed half as much as the genuine article).
One thing that perplexes me is the continued prevalence of cloth and surgical masks. In 2020, sure: there was a shortage of n95s, and something was better than nothing (more true pre-omicron). And a lot of people were wearing the minimum they could get away with to comply with mask requirements anyway.
Now no one is required to wear a mask if they don't choose to (at least most places-- I think my health care system still requires them), and there's no longer a shortage. So I'd expect things to have sorted into no mask ("No mask? No mask!") and n95/KN95/etc. Instead, I still see, among the minority of mask-wearers, a fair number of cloth masks that are probably very porous and minimally protective, and surgical masks that leak out the sides.
(And I at least found them much less comfortable than an n95. Surgical masks were sweatier, and I never found one that didn't fog my glasses. The seal and breathing room an n95 offers are a huge improvement!)
Maybe it's cost. But it doesn't really seem to correlate with income. (And it's possible to make n95s last if one is motivated-- buy seven and rotate them daily till they're visibly dirty or damaged.) And KN95s especially are pretty cheap, though I was never able to get a good seal with one.
"The King In Yellow" was definitely a play about covid. That's my headcanon now. Hastur is pleased.
Working in grocery, I've also remained baffled by the prevalence of barrel-bottom masks, often in addition to nose-out wearing. Have always wanted to see what's in someone's head when they run that kind of heuristic. What sort of strange information did they get about covid? Is it just a "better than nothing" rationalization? A virtue signal? (The strangest is still occasionally seeing people trying to make do with, like, bandannas or pulling their shirt collar up...) Sometimes wonder if it's just a collective-action problem, and if there was a loud enough PSA that "Actually Most People Won't Judge You Weird For Not Masking", perhaps a significant fraction would sigh in social relief. That's the only reason I still do it, on occasion: not wanting to ruffle any tribal feathers. Beware Trivial Social Inconveniences.
(Conversely, some people enjoy the social acceptability of masks for convenience - they don't like having facial expressions read, do like the reduced makeup requirements, whatever. I'm sympathetic to such people. Still seems like that wouldn't account for a huge fraction of maskers though.)
I see what you did there 😀
"Stranger: I wear no mask.
Camilla: (Terrified, aside to Cassilda) No mask? No mask!"
Yes, agreed! I don't understand this phenomenon, and I very, very much don't understand how we *still* have people wearing masks with their nose sticking out in places where there's no requirement to mask at all.
I can only conclude it's just a fundamental lack of information, except I also don't understand that, because if you think the risk is high enough it's worth ameliorating, how do you then think "but it's not worth researching for five seconds to find out how to effectively do that"?
Oh come on! LHN didn't say or imply anything remotely like they're delighted to do it. Nor did they say they planned to do it the rest of their life. And you & I both know that if LHN is masking it's because they are concerned that covid for them would be something worse than sniffles. It's a lowdown, unfair tactic to misstate what LHN said while sneering at the absurdity of the distorted version you present. It's as though I responded to your post this way: "Trebuchet's take is that the world has a right to see their glamorous face 24-7, and that their courage and realism about covid are as rare & impressive as a perfect LSAT. Lord save me from these egomaniacs!"
Covid isn't the sniffles, as the 2300 Americans dead of it this past week could attest. But at this point no one's making you protect yourself or anyone else against it.
As for understanding: Maybe we assess the risk of Covid differently. Maybe we assess the burden of masking differently. I'd guess probably both.
The recommendations were for those who are still interested in comfortable, effective masks at least some of the time. Which by my observation is a minority, but not vanishingly small fraction of the population.
Hm, it's arguably worth getting an omicron booster, especially if you're in a vulnerable population, but aside from that, yeah, pretty much (unless you're severely immunocompromised, but then you'd have to take extreme measures to avoid flu anyway).
It's a huge exaggeration to say there's nothing you can do to avoid it without spending your entire existence in self-imposed lockdown. I go to stores, movies, etc., but mask in these indoor public places. I go to work, but run a big air purifier in the office in lieu of wearing a mask and asking the people I meet with to mask. Before parties & the like my friends and I test. That is very far from self-imposed lockdown, & so far it has worked to keep me from getting covid. Of course I am aware that I may still get it, but the point of my precautions isn't to guarantee that I never get it -- it's to minimize the number of times I get it. Zero is my preferred number, but if my precautions mean I get it once, rather than 3 or 4 times, that also seems like a goal worth the trouble I'm taking (which in total is maybe 2 hrs per week of wearing a mask, testing once a month or so, and flicking the switch on the air purifier when I arrive at work).
>All of the stuff we did besides the vax was a giant waste of time and money
You're ignoring the period of time early in the pandemic when the medical systems were being genuinely overwhelmed, there weren't enough ventilators or even beds to go around, and "flatten the curve" was the (even in hindsight, correct) Narrative. Merely slowing transmission made sense as one of the terminal goals.
(There's some debate as to whether the US could have done better with a faster/stronger response, or whether it was naive to expect that to ever work when partisans were creating a low-trust cooperate-defect scenario, but the above isn't predicated on that)
Everything after the booster, though, much less clear that it wasn't a waste of money.
Agreed. Flatten the curve well after the curve was flattened was a classic mission creep.
The country got caught up in the dream that we could eliminate covid, the way we have various other diseases. At the beginning, I bought the idea that that was theoretically possible, though I didn't see how we could do it in practice -- locking everything down for a few months seemed like it would do terrible damage to the economy and to a lot of individuals. I now understand that covid just isn't the kind of disease you can eradicate the way you do polio, and I'm sure docs & epidemiologists at the alphabet agencies realized that from the beginning. Why didn't somebody in government say so? Why didn't somebody come up with a plan that optimized our chance of having the best outcome with this damn disease, given that eradication was not possible?
>I'm sure docs & epidemiologists at the alphabet agencies realized that from the beginning.
Other variants of SARS and MERS were successfully contained. I doubt it was immediately clear just how much more difficult SARS-CoV-2 was in that regard.
" Why didn't somebody come up with a plan that optimized our chance of having the best outcome with this damn disease, given that eradication was not possible?"
They probably had one, but it failed on the critical item: the populace must go along with it.
Sweden was one of the few countries where enough of the population didn't successfully protest the state epidemiologist's original plan (cue politicians jumping in, as happened elsewhere).
In retrospect, should it have been a red flag that FTX didn't buy a billion malaria nets and distribute them in Africa?
EDIT: Aka, should it have been a red flag that an entity claiming to be Effectively Altruist was only doing high-status effective altruist activities, not low-status but effectively effective altruist activities?
One can make some convincing arguments that AI safety research has OOMs more cost effectiveness than bed nets.
Maybe, but as Scott points out FTX was acting like it was idea-constrained not funding constrained, and it still put $0 into bednets. Maybe "AI safety" is so much better than bednets that you fund all AI Safety ideas first before buying a single bednet, but FTX was spraying money everywhere.
Actually yeah, I'd assumed they put at least some of their money in that but looks like they weren't? This one is a retroactive yellow flag.
> Like many of you, I’ve been following the FTX disaster.
What is/was FTX?
Out of sympathy, here is the very short, very vaguely accurate Cliff's Notes summary:
FTX was a very successful cryptocurrency exchange run by a man who was very connected to Effective Altruism. As an exchange, it was supposed to hold people's cryptocurrency and make money off of fees when exchanging from one cryptocurrency to another.

It turns out that instead of holding on to people's money, they were using that money to speculate on cryptocurrency. Also there was a lot of financially shady stuff going on that is too complicated to summarize, but most financial people agree was a deceptive attempt to make a particular cryptocurrency that FTX controls look better than it was.

Another successful cryptocurrency exchange in competition with FTX realized they were puffing up their cryptocurrency, and did some maneuvers to cause the price to drop really far really fast. As part of this many people wanted to withdraw their money from FTX, and then FTX stopped letting people withdraw their money: this revealed that they didn't actually have the deposits because they had been spending them on speculative investments, which they were not given permission to do. FTX is collapsing as a company, and a bunch of people lost the cryptocurrency they had deposited with them.
This has led a lot of Effective Altruism people to say publicly "Don't steal people's money in an attempt to make even more money: it's wrong, even if you planned to use the money you made to make the world a better place."
Sounds like Google is your mortal enemy
If I search "what is FTX" none of the results are as good as what FLWAB posted.
The results I get are (in order):
* the FTX official website.
* the wikipedia article
* a bunch of articles about the collapse of FTX. When I spot-check these, they are written as if I already knew what FTX is/was.
I don't know about that but they are a very dubious company one would do well to avoid.
I created a prediction market for whether any FTXFF grantee will be legally compelled to return money due to FTX's bankruptcy. I think it's unlikely. https://manifold.markets/JonathanRay/will-any-ftxff-grantee-be-legally-c
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America. You, a random AI researcher who tried Googling “who are these people and why are they giving me money” before accepting a $5,000 FTX grant, don’t need to feel guilty for not singlehandedly blowing the lid off this conspiracy. This is true even if a bunch of pundits who fawned over FTX on its way up have pivoted to posting screenshots of every sketchy thing they ever did and saying “Look at all the red flags!”"
Scott,
I very rarely comment here, but I follow you voluntarily. I don't think you're a bad guy, I've learned some interesting things from you. But this reply is really a joke, I'm sorry. I'm a random well-educated liberal, and it's been wildly obvious to me that FTX was a Ponzi scheme for years, and more importantly, not just me, but a thriving crypto-skeptic community.
https://twitter.com/Bitfinexed?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
You'll see right in the biography that this guy's been covered by the MSM for years. Ever since Mt. Gox blew up, there has been a super-abundance of critical analysis of crypto as a giant scam.
https://davidgerard.co.uk/blockchain/2021/09/07/el-salvador-bitcoin-day-how-it-went/
I'm just posting random links I used in emails years ago. Here, this was easy to find from 2018:
https://www.cnbc.com/2019/11/04/study-single-anonymous-market-manipulator-pushed-bitcoin-to-20000.html
Does this sound like a trustworthy basis for assigning financial value? No, it does not. This is CNBC.com - I am not deep diving here.
I am neutral on your point as to whether NGOs should feel *bad* about taking money from a criminal. They were presumably using the money to do good, and it's easy to get confused and not know if and when the scammer was crossing the line from unethical lying and cheating to criminal behavior. That's an individual ethical decision. But I guarantee you that the Democratic party, ProPublica, and the SEC were extremely aware that SBF was an untrustworthy scammer, although they may not have all known he was crossing into criminal behavior.
The details of how SBF appears to have committed fraud were not obvious and well known, but crypto was readily knowable as a Ponzi scheme that was consistently bringing ruin to naive people. You absolutely could have done better due diligence to understand that, and so could any NGO who wanted to understand with an hour of research. Of course, it's easy to do research badly and not realize that you have done a bad job, so I'm not personally scorning anyone who was rugged, but those people absolutely should hold themselves accountable for mucking up something not overly difficult.
You don't need a prediction market, you just need a reasonably diverse base for information intake and a willingness to take adverse information seriously. Crypto exchanges have been blowing up on the regular for a decade, all the info was there in plain sight.
Scott is guilty of being human. This is a friend-of-a-friend kind of thing, if people he knew were saying that people they knew said these guys were okay, why would he doubt them?
It's also the technocratic strain in Rationalism and EA that believes anything done with modern advanced high-tech methods has to be so much more efficient and better. I'm suspicious of technocracy so crypto always sounded to me like a very elaborate way to get scammed, especially when some people were enthusing about how it was untrackable and you could safely buy your guns/drugs/illegal but shouldn't be stuff with it.
However, it's always very hard for people to believe that others who share (or seem to share) the same general beliefs as they do, and move in the same circles, and are involved in the same good causes, can be up to no good. That may not be the strawman 100% rationalist who takes nothing on trust and always runs tests on whether to trust their spouse when their spouse says they love them, but it's a lot more human and a lot more personable.
You are overstating your case (just like many commenters before you); crypto is not ENTIRELY a scam.
BUT, more importantly, it is an obviously sketchy industry, just like, say, personal development advice ("this book will change your life!") or medical supplements or something like that.
Any "crypto-billionaire" should be automatically viewed with suspicion unless proven otherwise (and in my eyes, the number of crypto moguls who have proven themselves beyond suspicion is so far exactly 0). Failure to see that does indeed seem like a significant error of judgment.
>I'm a random well-educated liberal, and it's been wildly obvious to me that FTX was a Ponzi scheme for years, and more importantly, not just me, but a thriving crypto-skeptic community.
Yes, but there are probably 5 other things you think are obviously Ponzi schemes (or otherwise criminal/corrupt) that aren't.
Of course for any controversial claim, there will be people who believe it and people who dispute it. When facts come out that prove one side right, everyone in that side will get to crow about how 'obvious' it was the whole time and get to feel superior to everyone on the other side.
That doesn't actually mean it was obvious and that the other side didn't have a coherent rational story for their beliefs, or even that the 'winning' side necessarily evaluates evidence in this domain better in general. You need an N of more than 1 to prove that.
Also, more specifically: I'm 100% with you on the side of believing that crypto and web 3 has been a speculation bubble from that start, motivated by grifters and confidence agents. But that doesn't necessarily mean that *every individual actor* on the scene is intentionally committing fraud and lying about their intentions at all times. You still have to make judgement calls in individual cases, and can be wrong for negative judgements.
I agree with you. It's hard to strike a balance in the reply and in how to talk about these things. You're right, I don't have any evidence that some notional "side" that is "crypto skeptics" is generally better at evaluating evidence about scammers than a notional side that is "crypto friendly". I also agree that not everyone who believes in crypto, or even believes in and markets it, is a "scammer" in terms of committing securities fraud, etc. Speculation bubbles are weird. You tell people "buy this coin, and then its value will go up and you'll be rich", and that.... is true at the time! And will be true for an unspecified future amount of time. Everyone is being completely honest and accurate... until they stop being accurate later, and the "dishonest" part often comes in from secondary lies and deception about the nature of the market, the marketplace, and the financial games happening offstage. Which not everyone is aware of, or fully understands when they are told about.
I don't really think that "crypto skeptics" are even a side, or a coherent community, etc, although they probably overlap with market skeptics and small-c conservatism that may have some overlap with "conventional NPR liberal"... whatever that trope really represents about its own population base, etc.
Anyway, to me personally, I wrote the reply because Scott's reply suggested or implied, to me, that everyone who fell for this had no reason to take stock, no reason to doubt themselves or their process for evaluating trustworthiness, and there was no easy way to know about any red flags. I don't agree. If you got rugged by this, in this day and age, you do have a reason to doubt yourself and change your methods going forward. Mistakes are part of life, but learn from them to avoid repeating them. Most of all, I want to push back on the last part. The red flags were widely reported and readily available to be known.
There's a very important difference between "Crypto doesn't have real value/potential, and FTX is a company profiting off of people playing casino-like games, which I think is bad" and "FTX is committing actual fraud and/or is insolvent but hiding it"
Even believing crypto is basically a scam, it was plausible that FTX was no morally worse than a regular casino, which in general are (I assume?) non-fraudulent businesses taking advantage of people throwing their money away.
(Caveat: I don't actually believe crypto is a scam; I just think that even if crypto is a scam, FTX was plausibly non-fraudulent.)
This is a valuable distinction, and other replies also draw a distinction between "crypto is a speculative bubble that will pop and selling it is shady" and "committing financial crimes". This is true... I guess. Not every crypto-related business is committing financial crimes, and not all of them are even lying to and/or concealing material information from their customers, which is also not always a crime, although it is always shit behavior.
To know that FTX was committing crimes, you needed to be paying attention to more specific info. However, that info was also out there. The Bitfinex'd link is a gateway to lots of that info. How many offshore crypto exchanges are committing massive financial crimes? all of them. Every single one. No doubt whatsoever. And there's plenty of evidence. But it is true that this is a complicated and messy topic and that nonexperts can interpret evidence and facts in multiple ways and also get bored and tired and distracted and not know what is a crime and what is true.
So I don't think everyone who didn't realize FTX and every other offshore crypto exchange platform are criminals, is a bad person or an idiot for that. But if it is your job to figure these things out, and you didn't figure it out, you can and should have done so. It wasn't a case where the needed info wasn't available; it was.
Just as a matter of diplomacy, you're probably better off not relying on David Gerard for a big chunk of the support for your argument. At least not here.
Care to elaborate? I have no idea of the backstory here. Anyway, he's by no means and in no way unique or critical to the argument, I just can't be bothered to hunt down other sources for the community.
Gerard is _very specifically_ opposed to Scott Alexander at length, and more generally to whatever he sees as the 'right wing' of the ratsphere, to the point where he had to be [topic-banned on wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard/IncidentArchive1061#Propose_topic_ban).
I don't think it undermines Gerard's position on crypto, specifically (it's probably made him a _little_ more opposed, but only in the tribalistic sense that red/blue affiliation tweaks everyone), but it's an issue in other spaces, and I say that as someone who interacted with him better on tumblr.
Do you believe all crypto businesses are scams?
Blockchain technology has the possibility to change the world for the better but we have yet to get it truly woven into the fabric of our society and the regulating powers that be may ruin it because it makes so many of their institutions obsolete. Right now it's like the internet in 1996. No idea how to invest other than in the broad idea that it will move forward. Cryptocurrency, on the other hand, doesn't have a super compelling use case for developed economies other than being like a wildly speculative commodity.
My favored examples of real world blockchain utility mostly come down to enforcing government transparency. As an example, Nigerian land speculators have found that, instead of buying land from existing owners, it's often much cheaper to bribe a land registrar to surreptitiously alter the title [1].
Consider a land registry on a public blockchain where records can only be updated with two of the following three cryptographic signatures:
A) The existing land owner as stated on the blockchain
B) A land registrar official
C) A state-level judge
Such a system doesn't make it *impossible* for corrupt officials to illegally alter records, but it makes things much more challenging by enforcing transaction transparency on a record system that the government doesn't directly own or control.
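The 2-of-3 policy above can be sketched in a few lines. This is a minimal illustration of the approval logic only -- in a real blockchain system each party would produce an actual cryptographic signature that gets verified, whereas here a party name simply stands in for a verified signature, and all names are hypothetical:

```python
# Illustrative sketch (assumed names, not a real protocol): a 2-of-3
# approval policy for updating a land-title record. Each entry in the
# signature set stands in for a cryptographically verified signature.

REQUIRED_PARTIES = {"owner", "registrar", "judge"}
THRESHOLD = 2  # any 2 of the 3 designated parties must sign

def update_allowed(signatures: set[str]) -> bool:
    """Allow a record update only if at least 2 of the 3 parties signed."""
    valid = signatures & REQUIRED_PARTIES  # ignore signatures from anyone else
    return len(valid) >= THRESHOLD

assert update_allowed({"owner", "registrar"})   # owner + registrar: allowed
assert update_allowed({"registrar", "judge"})   # registrar + judge: allowed
assert not update_allowed({"registrar"})        # a bribed registrar alone: blocked
```

The point the policy encodes is exactly the one above: a single corrupt registrar can no longer alter a title unilaterally; they need a second colluding party, and the attempt is publicly visible either way.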
Alternately, imagine a system where govt contractors are paid in cryptographic tokens that can only be cashed out for untrackable dollars if and when they're paid as salary into individual worker-owned accounts. When paid from one contractor to another, or shifted between expensing units within a contractor, those transactions live on a public blockchain. If you want to figure out, for example, where exactly the money went for NYC's 2nd Ave subway, it's a database query rather than waiting for the NYT to spend several hundred hours doing investigative reporting [2].
In neither of these examples are you necessarily *replacing* an institution, but rather substantially *reforming* the behavior of an institution by making malfeasance harder to hide.
The primary case at present for blockchain fully replacing an existing institution is, basically, when you want to coordinate crime. Specifically, something like bitcoin is useful as a way of illegally circumventing capital controls and currency pegs in failing states with a hyperinflationary currency.
There are speculative notions about how blockchain could enable things like opt-in transnational states [3], or be used for public goods financing [4], or perform some kind of secure online voting system [5], but none of that's here yet and I don't understand any of this stuff well enough to confidently opine about viability.
[1] https://guardian.ng/property/land-registries-remain-cesspit-for-bribery-in-nigeria-says-report/
[2] https://www.nytimes.com/2017/12/28/nyregion/new-york-subway-construction-costs.html
[3] https://vitalik.ca/general/2022/07/13/networkstates.html
[4] https://consensys.net/blog/enterprise-blockchain/white-paper-equitable-public-good-allocation-and-the-unlocking-of-economic-value-through-token-based-markets/
[5] https://ieeexplore.ieee.org/document/8853836
In neither of the first two examples is blockchain necessary to achieve the benefits -- if a government is willing to designate a blockchain as its "source of truth" for land ownership it could certainly do the same with a third-party database hosted and run by a neutral party outside their jurisdiction. (For the record, in neither case do I believe it's realistic that a government would actually do this).
And requiring all government contractors to be paid in internal accounting dollars that could only be transferred internally unless paid out to worker-owned accounts might work _better_ run through a centralized Federal database, as there could be a strong validation process to ensure that the worker-owned accounts were actually worker-owned, and that submitted expenses, etc. were valid, contractor organizations had actual existence as corporate entities with Federal tax numbers, etc.
The fundamental argument for blockchain solutions for these types of problems is that they remove the possibility for modification of the source of truth or the transactional history and therefore remove manipulation of the database as a source of corruption, but they are by no means corruption-free (51% attacks can enable violation of the integrity of the chain, bugs can enable all sorts of malfeasance and hacks, and even barring the above, just because a transaction is valid doesn't mean the input data was correct or the participants are actually who they say they are).
I think your comment about whether it's realistic for the government to "actually do this" is important in the discussion. You are correct that the blockchain is not technically necessary, but if the alternative solution is wildly implausible, isn't there value in the blockchain solution over the current system especially in places with significantly more "old school" corruption? Assuming the tech was there to implement this fairly easily and you could really improve transparency--wouldn't millions of powerful local officials feel threatened and work to prevent adoption?
I actually agree with your criticisms and should clarify that I don't presently endorse any of those use cases as, necessarily, a good idea. My intent was to scope out the best presently plausible use cases for blockchain, not to present blockchain as the ideal solution to the problems under consideration.
There are institutions, they are just code instead of people. You can read the code and opt in. Anyone can make new code at any time and people can move to it freely.
Anyone can fix any problem by writing new code and moving to it.
The blockchain part simplifies the distribution and running of code and establishes truth (immutability) and identity (private keys).
Im handwaving a whole lot here but maybe you get the idea?
I think the specific case where a blockchain is useful is where there is a mechanism to do that which would (in a non-blockchain situation) result in a specific guy making the change, but specific guy can also be bribed and he just makes the change without going through the mechanism.
Like, the law is officially supposed to be changed by a public vote in an elected assembly, but in practice, the clerk can just change things and you can bribe the clerk and not bother with the vote. In that sort of low-trust situation, you could put the law on the blockchain and set up the DAO so that the only way to change it is for a majority of members of the assembly to directly input their approval of the change - ie the vote takes place on the blockchain. That cuts the clerk out of the loop (obviously, you can still bribe the legislators, but that tends to be more expensive).
I think these sorts of problems are relatively unusual - ie problems where the official records are changed by bribing the records-keepers.
Also, I think that voting on a DAO is sufficiently technically difficult that I wouldn't trust elected legislators to do it correctly; and they would get a staffer to do it, and now you're back to the original problem: you can just bribe the staffers.
I think Scott's piece on EA as a tower of assumptions is particularly relevant now: https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of.
If EA were a single indivisible idea that includes the FTX affair, that would be pretty bad.
But luckily, it is a series of distinct assumptions. One can be skeptical, for example, of the idea that one should prioritize a high expected value, even if the modal outcome is neutral or negative, but that would be no reason to doubt much more basic EA assumptions, like "not all dollars of charity have equivalent impacts." Or "poorer people generally benefit more from charity than richer people, and by global standards, very few of the poorest people live in Western countries."
Unfortunately for EA, those assumptions are not unique to EA thinking, and EAs now have a lot more baggage.
EA has a few decent insights, some of which are not unique to EA - though EA is making positive inroads toward getting them more widely considered. Also unfortunately for EA, some of the other hills they seem inclined to die on involve perspectives with limited value for most other people: AI risk, animal welfare, "weird" Rationalist trappings like polyamory. These were already troubling for a group that was just becoming well known. Now that FTX is getting linked to EAs, they are going to have a much harder time getting positive messages out.
For anything concrete you'd have to go back in time and be smarter than SBF. Here's the archive link (which of course doesn't prove anything)
https://web.archive.org/web/20220317072103/https://donate.thedigital.gov.ua/
Hence including this in the topic of "updating". My prior on "big political donor is involved in money laundering" is high to start, of course. When the donor in question is a finance guy I update higher. When he appears to be a force for good in the world like SBF I revise downwards. When he's caught doing actual fraud, I revise upwards again.
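The sequence of revisions described above is just Bayesian updating in odds form. Here is a toy sketch with entirely made-up prior odds and likelihood ratios (none of these numbers come from the comment; they only illustrate the multiply-the-odds mechanic):

```python
# Hypothetical illustration of sequential updating in odds form.
# All numbers are invented for the example.

def update_odds(odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return odds * likelihood_ratio

odds = 0.5                      # prior odds that a big donor is up to something shady
odds = update_odds(odds, 2.0)   # the donor is a finance guy: revise upward
odds = update_odds(odds, 0.25)  # he appears to be a force for good: revise downward
odds = update_odds(odds, 20.0)  # he's caught doing actual fraud: revise sharply upward

probability = odds / (1 + odds)  # convert odds back to a probability
print(probability)               # 5/6, about 0.83
```

Each piece of evidence multiplies the odds by how much more likely that evidence is under "shady" than under "honest", which is why one strong piece of evidence (the fraud coming to light) can swamp several weak ones.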
Admittedly, my original prediction of technically-legal finance shenanigans may be much higher than most readers here, but I don't -feel- cynical. It's what allows me to laugh off critics of Bernie Sanders saying he has three houses and a couple of supercars. Well yeah, but I'm sure he got all his assets in ways that are technically legal. He's in Congress, what do you expect?
(Huey Long, when asked how his personal wealth had grown during his time in office to ten times more than his gross salary could explain, famously replied, "Only by exercising the most exTREME frugality.")
> The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”
You know, I asked myself some similar questions after being cheated on by a spouse and friend. In the end, I decided the act of trusting itself has intrinsic value. It's not infinite, so you have to take some care, but if you trust 100 times and get burnt once, and that once isn't the end of the world, maybe you came out of it OK?
Also, telling everyone proactively how, when you were fooled, it made you feel bad and take more care in the future means placing trust in you is a better bargain than placing trust in any other random person.
"...but I am just never convinced by these calibration analyses."
I'd be more convinced if the *polls* seemed to take this into account and get better over time.
One thing that I think hurts a lot of this is that folks really want to assume independence because it makes the math so much easier. The underlying reality often isn't independent.
I have a super-short writeup about this and how I think it helps to explain the 2016 election errors by the pollsters.
http://mistybeach.com/mark/math/CorrelationsAreReal.html
Correlations are, indeed, real. But they usually revolve around a "hidden variable". Still, one should realize that that hidden variable probably exists, even if one can't identify it. (But sometimes it really *is* just random variation, and won't persist.)
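The effect of that hidden variable is easy to demonstrate: independent errors average away across fifty state polls, while a shared bias does not. A minimal simulation sketch (the magnitudes below are arbitrary, chosen only to make the contrast visible):

```python
import random
import statistics

def avg_error_spread(n_states=50, n_trials=2000,
                     shared_sd=2.0, local_sd=2.0, correlated=True):
    """Std. dev. of the average polling error across states, over many trials.

    correlated=True : every state's error shares one hidden bias term
                      plus its own local noise.
    correlated=False: errors are fully independent across states.
    """
    avg_errors = []
    for _ in range(n_trials):
        bias = random.gauss(0.0, shared_sd) if correlated else 0.0
        errors = [bias + random.gauss(0.0, local_sd) for _ in range(n_states)]
        avg_errors.append(sum(errors) / n_states)
    return statistics.stdev(avg_errors)

random.seed(0)
independent = avg_error_spread(correlated=False)  # local noise averages out (~local_sd / sqrt(50))
correlated = avg_error_spread(correlated=True)    # the shared bias survives averaging
print(independent, correlated)
```

Under independence the spread of the average error shrinks with the square root of the number of states; with a shared hidden variable it stays roughly as large as the bias itself - which is why a forecaster assuming independence will be badly overconfident in exactly the 2016 way.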
I used FTX and left some money there way longer than I should have because of EA/SSC/Rats implicitly and explicitly vouching for SBF. Obviously I don't blame anyone but myself; it wasn't a big portion of my portfolio (sadly I can't log in to check specifics), and I'd still trust the related communities more than most, but it still seems like a big collective L.
Why are you demanding proof from me? I didn’t endorse it as proven! But it is an angle that was not mentioned by Scott and I thought it should be included as a possibility since he is trying so hard to figure out what to think about this.
https://app.hedgeye.com/insights/122943-marc-cohodes-ftx-is-dirty-rotten-to-the-core-hedgeye-investing-s?with_category=17-insights
Someone else posted this link here but maybe you didn’t see it. I recommend it because even though it isn’t directly relevant to the Ukraine theory, it argues very strongly that SBF was put up to create FTX by much older and more experienced figures and that FTX never made any sense as a legitimate business.
Was this top-level comment meant to be a reply to another comment? You may have been hit by the Substack bug where replies through email don't work.
> make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively be emotionally dead to anybody who goes above a certain threshold.
Do you think you should add to that list a certain blogger who advocates for a monarchy in a time when the more religious party coalesced around a figure who sent a mob to disrupt the peaceful transition of power, given that your association with him has pulled his audience into yours and given your halls a well deserved reputation for racism and fascism among those who have had the good sense to be driven away from that stink? Or are you still being charitable to bad ideas from a dude whose qualifications consist of having a blog with smug essays on it? (I suspect you're about to learn that headlines are short-lived, and those trotted out as stars for a few years can be abandoned and ignored in a mere turning of the times. If you want to stay shining, and I'd like that personally, you're going to have to think about your mistakes. Thiel won't even look at Moldbug if it doesn't suit his purposes anymore.)
The problem with the SFBA Rationalist cult is very specifically that their anti-credentialism led them to discount the importance of any establishment knowledge and utter crankery is the result.
Someone else called it: hubris. It's hubris that causes EA/Rationalist types to attempt to solve the same problems as everyone else believing their magic online blog juice will prevent them from making the same mistakes as everyone else, so they make not just the same mistakes but the same mistakes from a past era.
Credulity isn't a virtue if an entire community forms around someone who sorts out the most credulous and willing to believe the narrative of genius even secondhand...
Why are you platforming moldbug with this comment on this commonly read blog post that isn’t about him?
In what respect is it even intelligible to say that Scott "trusts" Moldbug?
Only in a very bounded sense — but still a meaningful one: Scott does believe there's positive value in reading him. Some would argue that Moldbug is the type of insidious proselytizer who is not *safe* to read even if you go in telling yourself "I disagree with him on core moral points and always will, I'm just curious to see what his object-level arguments are"; who will pollute your thinking with memes — in the Dawkinsian sense — that will make your thinking trend more right-wing over time without your conscious awareness.
Personally, I want to think better of Scott's skill as a rationalist than to believe he would fall for such a "honeytrap"; but a more paranoid/cynical person could very, very easily make the case that this is what happened to him. That even as he tried to reject the overall worldview, he allowed disjointed Moldbuggian ideas and assumptions to creep into his thinking little by little, Cathedrals and left-swimming Cthulhus and the lot; and that a critical mass of those has caused him to become much closer to reactionary-adjacent than 2012-Scott would have believed reading Moldbug could or would make him.
I am skeptical that "dangerous meme" / infohazard is a useful concept. Like "misinformation", people are only going to apply that idea to their outgroups as a way to discourage testing and exchanging ideas, to police social boundaries. (As we see here with Impassionata's excellent impression of a NYT commenter.)
Having said that, even if "trusting Moldbug not to publish infohazard" is an eccentric interpretation of the OP's usage, it certainly qualifies as intelligible.
Why did you write this?
I struggle to believe that you wrote this, or could reread it now, with the intent to change our minds or persuade anyone here to your perspective. Maybe I'm wrong, maybe you did genuinely think that this was persuasive, but that's pretty hard to credit.
If you wrote this to express your hatred and disgust, and more broadly the hatred and disgust certain factions on the left have for people here, I'd like to assure you that we are all well aware of it and have been for some time. It has been expressed by a wide variety of writers and methods for well over a decade. It was shocking and hurtful years ago; it's normal now.
But, in summary, the rules here are very clear and always have been. I'm not sure anyone could classify what you've written as kind and it's really unclear, to me at least, what is necessary or true of any of it.
PS, seriously, the non-troll way to write this is some variant of "Has this FTX misjudgment caused you to reevaluate any of the controversial writers you've written about in the past or beliefs you've become associated with."
It is so over the top that it reads like a hate generating psy-op.
Why did you write this?
I struggle to believe that you wrote this with any intent but to lecture me on how you wish you were spoken to. I speak as one without regard for your feelings because facts don't care about your feelings. The fact is that Scott Alexander's writings and their **consequences** drove people away. A lot of people away.
I'm here because I like some of Scott's work and still have some hope for reasons I don't fully understand myself.
Scott Alexander platforms someone who actually advocates for a strongman to take power, which no matter what word you put on it (monarchist or fascist) is essentially, in the consequentialist view, a call for violence against minorities of various idpol stripes unless you are a blind fucking idiot too credulous and too easily taken advantage of to be considered a serious political writer. You might not believe it, but enough people do (and those people have a voice, too).
> it's really unclear, to me at least, what is necessary or true of any of it.
Think about it then, and trouble me not with your insipid handwringing about tone, for I don't care. If I am banned for speaking truth then I shall laugh and cross Scott Alexander off my list for good. I always hoped he&his would come around.
Cry less. If Scott Alexander's communities harbored racists and fascists because of Scott Alexander's choices, I want to believe that Scott Alexander's communities harbored racists and fascists.
Is there a standard definition of "platforming" and "harboring" people?
Ah, I figured the PS was a bad move, ah well. I was genuinely curious whether you thought it might convince someone or, more seriously, whether there was some third option I hadn't considered. That happens sometimes.
Have a wonderful day!
> Have a wonderful day!
If I'm callous and cold at least I'm not a disingenuous little shit.
I dunno man - you don’t sound cold so much as inflamed, and the jury’s out on whether you’re a disingenuous little shit or not. Catch more flies with honey than you will with vinegar, though.
Don't think you understand me or my motivations. I'm interested in pointing out the flies and urging for their prompt removal. The failure mode of kindness is unknowing indifference to the malign and deceitful. The racists and fascists know their foul full opinions will get them removed.
Ah, the Passion Flower continues to flourish even when transplanted to a new patch, I see!
Impassionata, I have been highly amused by your writings over at r/drama, especially your version of history about engaging with Rationalism. As my admittedly flawed memory recalls it, you spent most of that time arguing that this time for sure, latest investigation was going to end up with Trump in jail. The last prediction of that kind you made was that within two weeks' time he was going to the slammer. Naturally, this did not happen. Naturally, you were joshed about it. And naturally, you left, set up your own site, and went overboard with the marshmallows.
I'm glad to see you seem to be doing better and have found a new happy home!
I'm glad to see you fell for the marshmallow facade. As to your admittedly flawed memory, it seems like it is indeed as flawed as you admit.
All of my statements about Trump were in the vein of what _should_ happen. Were we not captive to the twin bindings of boomer ideological fog and weak rationalist political confusion it might have happened; recall, of course, that Nixon lost political power very swiftly. This is a mark of how low we have fallen. I was wrong of course about the level of corruption in our politics.
So my statements about time were in this vein: that at any point the axe might fall. Now we see what that axe falling looks like, and the interesting times are ahead for the Republican Party: will it eject Trump like snot into a kleenex, and will the snot metastasize?
The real frailty of the ignorant in the culture threads seasoned by the abominably stupid "You Are Still Crying Wolf" was this: that among generally atheist populations used to seeing blind faith in the nation's citizenry and all the dangerously poor thinking that denoted, it was in denial about a fascist movement on US soil. Shall we drop the signs?
* the more overtly religious party gathering under a strongman type politician via a xenophobic impulse,
* incorporating threats, from that politician, of physical violence against journ*lists
* making blatantly dishonest claims about the veracity of elections
* separating children from their parents in camps
* engaging in a physical crackdown on a protest and then touching a Bible
* convening a mob and sending it in the direction of the hall of Power in which the peaceful transition of Power was occurring
* continuing to belabor the lie in order to further division in what should be a united country.
This is fascism. Fortunately the American public could see what Scott could never seem to admit or even understand: that "You Are Still Crying Wolf" was the beginning of the end of his career as a political writer taken seriously outside of his small circle of neoreactionaries brought in as an effort to expand his audience. (/r/sneerclub is populated by fifteen thousand subscribers: people who were repelled by the stench.)
Still, it's good to see you, BothAfternoon (right?)! It's always good to have a personal herald.
> the end of his career as a political writer taken seriously outside of his small circle
> /r/sneerclub is populated by fifteen thousand subscribers
/r/TheMotte has eighteen thousand subscribers
/r/slatestarcodex has over fifty thousand
In a world where the mainstream academic opinion of Trumpism is that it's fascism and slatestarcodex has 50000 Reddit representatives, are you saying you don't think it's a big signal that 15000 people walked away from SSC?
American mainstream academic opinion is that Democrats are good and Republicans are bad. That part is completely predictable and independent of who happens to be the Republican candidate today. (I am not saying that the opinion is wrong, by the way. Just that it is constant, so we cannot use it as evidence for anything specific that happened recently.)
Also, 15000 is a relatively small number compared to the number of internet users: you could easily find 15000 supporters of a mainstream theory, or 15000 supporters of a conspiracy theory. (Probably even easier for the latter.)
But most importantly, 15000 people in sneerclub does *not* mean 15000 people who walked away from SSC. Many of them have probably never been SSC fans in the first place, and would be sneerclub members also in a parallel world where Trump does not exist. There are all kinds of reasons for joining a nerd-bashing online club. Mostly, because it is fun... if you happen to be that kind of personality type.
You really put the "rational" in "rationalization."
Ooooh, mainstream academic opinion!
Well, that sure solves the entire problem of what, who, how and when to believe!
My opinion of mainstream academic opinion is the same as your opinion of all of us on here, Impassioned One.
"All of my statements about Trump were in the vein of what _should_ happen."
Ah, my delicate little petal, it was that you said "he WILL be going to jail" not that "he SHOULD be going to jail" and when your prognostications did not happen, you flounced off.
Well, we can all rewrite our personal history, and the good folk over at r/drama are not going to be too credulous one way or the other.
As for the rest of it - why do you keep expending so much mental energy on a failure like Scott etc?
> it was that you said "he WILL be going to jail"
There's still yet time my voluptuously verbal friend!
> when your prognostications did not happen, you flounced off.
That is how you tell the story, but the way I tell it is that I was just sick of being browbeaten and not being able to return fire. People can bully leftists a lot in a lot of subtle ways that don't catch moderator attention.
I flounced off because being unable to say "that's racist" or "that's pretty much just fascist" is pretty precisely what drove themotte into its present state.
The same pattern emerged on Discord: a community under Scott Alexander with a sidebar community that was for the mask-off racism. It was uncanny and very interesting how it essentially mapped Scott's own mind: a connection to the racist/fascists that was never allowed to be fully 'conscious' as it were.
> why do you keep expending so much mental energy on a failure like Scott etc?
A good question. He wrote something that impacted me personally once, might be the only real answer.
Out of curiosity, just how many SSC and related elements have you signed up to? I've never gone near the Discord, so O fiery-hued blossom of indignance, you are more devoted a follower of Scott than I am!
The subreddit and the discord. Maybe I was a devoted follower of Scott, but his fruit didn't fall far enough from the rotten rationalist tree. He's better than they are, or could be.
I, uh, so Scott Alexander and rationalists generally are secret fascist theocrats, as evidence here's bad behavior from their weird Berkeley sex cult?
I jest but there's a core thing here that confuses me. I'm not sure if you've attended rationalist or EA meetups, but they're really, really different from the people attending your average, say, Trump rally. And clearly these things are tied together somehow in your mind, and I'm genuinely curious what connection you see. Like, I know Thiel backed Trump for a while, but your average rationalist and your average, say, Oathkeeper are just phenomenally different along virtually every significant personality aspect. How do these groups work together, if at all, in your mind?
I respect your mission deeply. I got directly involved through trying to cut through the shitty politics of these people so that's just the beat I walk, without intending to distract people. (The reality might unfortunately be that these people are too thick in their denial for any of what we say to have an impact in either direction...)
Multiple approaches are necessary in cult deprogramming. I'm hoping to wrap my participation up before too long, I've spent an alarming amount of time in this ego charity of mine...
Godspeed fellow traveler.
I strongly suspect there's a motte and bailey definition of scientific racism coming soon, where the motte is some wannabe eugenicist breaking out the skull calipers, and the bailey is anyone who knows what IQ and crime statistics by race look like.
Oooooh, sorry, sometimes I forget that people can live in parallel worlds.
It's because this is one of the few places where smart conservatives can have open conversations. If it's a credible institution or big site, we get censored off in fairly short order. If it's Fox News...it's Fox News, I don't want to have a conversation there anymore than I would want to in the CNN comments section.
To perhaps make this a bit more concrete, I think there's some really interesting arguments around feminism in Lasch's "Women and the Common Life". I'd like to discuss them somewhere and will probably post a few points of interest in the next open thread. I can't post it in a general area or on most social media, I've seen enough people get banned and depersoned to avoid that. Imagine a conservative posting about feminism on Reddit, sounds miserable. However, the best conservative discussions I see, DSL and the Distributist's comment section, aren't really that great.
So yeah, there's no secret Reactionary signal Scott is sending up in the sky. Sincerely, this is by far the best place for intelligent conservatives to discuss things. Everywhere else literally bans us or is a conservative "ghetto".
PS, if anyone does have recommendations for another conservative site with a thoughtful discussion forum, I would greatly appreciate it.
You're sort of right, but in a way that doesn't make a great case for yourself.
I mean, the main counterpoint is that maybe there's a reason those views aren't tolerated by anyone intelligent elsewhere.
"Don't tolerate people who champion race essentialism as racial science" is a recently-erected Schelling-point-now-Chestertons's-fence that came about as a direct response to the holocaust and other similar genocides. If you want to tear down that fence, you have to be very sure you know the consequences before doing so.
>Sincerely, this is by far the best place for intelligent conservatives to discuss things. Everywhere else literally bans us or is a conservative "ghetto".
Once again, consider how saying, with a straight face, "Every place with intelligent conversation bans us and the places that allow us are filled with idiots" reflects on the things you want to say. Maybe everyone else is wrong, or maybe the marketplace of free ideas has judged your ideas to not be marketable.
And it's certainly *possible* that everyone *is* wrong on some things. Hell, it's even probable for at least a small portion. At some point, though, you have to wonder about the sheer number of individual things you're claiming every place with "intelligent discussion" is wrong about in order to ally with all the people in those "ghettos".
(Also, I can't help but point out the irony in using "ghetto" as your insult of choice when implicitly defending accusations of racism)
>Imagine a conservative posting about feminism on Reddit, sounds miserable.
Speaking of parallel universes... some of the biggest anti-feminist communities on the internet have been hosted on reddit. They've cracked down on *some* of them, but many still exist. (I mean, take a look at /r/conservative)
...Unless you're saying that it would be unpleasant because conservative reddit is one of the "ghettos". In which case, I agree that engaging with conservatives on reddit is pretty miserable. I much prefer the quality of their conversation here... on average, at least.
This has been engaged with before. The condensed version of it is "If you set up a space that is free from witch hunts, you end up with three principled people and a zillion witches".
The further problem is, who is a witch? As the post on the Hexenhammer showed, the description of "well she's old and mean and lives alone and has a pet cat and we had a quarrel and then all my milk went sour so clearly she's a witch" isn't good enough.
All we can do is discourage people who go around casting spells and putting curses on people when they do that, and leave the mean old cat ladies alone if they're not riding around on broomsticks.
You want us to conduct a witch hunt. Scott is not, nor does he want to be, Matthew Hopkins.
I'm beginning to think that conservatives' claims that any form of censorship of their views constitutes a witch hunt is an implicit admission that you know you're witches, consorting with the political devil.
(Or maybe it's just that everyone on this site likes making references to things Scott just wrote and it's a coincidence. On the other hand, you and Treb are the only two who've made that reference in this thread...)
I'm not asking for a "witch hunt". You're perfectly free to conduct your witchcraft elsewhere, and express yourself in other ways here, so long as you're articulate. I think the trend of teenage SJWs digging up that one time you said the N word when you were 14, or straight up twisting facts to prove someone is "problematic", is a pox on civil discourse norms.
That being said, I *am* saying that you shouldn't be allowed to practice witchcraft openly in the [idea] market square, while encouraging others to join you, without consequences - and those consequences should probably involve being removed from the [idea] market.
Adding on to that, our mayor, Scott, probably shouldn't be openly reading books on witchcraft and loaning the books out to others to read and - God damn, the longer I torture this metaphor, the cooler you come out as sounding. Be gay do witchcraft.
Weirdly, there's not an idea market czar who gets to decide what ideas are allowed to be discussed in public.
But first you have to define what is "witchcraft" and it's not as plain, easy and obvious as "well clearly everyone can recognise witchcraft when they see it".
Like you, I was (and indeed am) very, very positive about what is witchcraft. There are beliefs, philosophies, and current social paradigms that I think are witchcraft, and worse than witchcraft: the child sacrifice to Moloch, demonic worship.
Like you, I wanted to ban that. There are things I think are pure poison, hateful, abhorrent, damaging to society and harmful to the self.
But you know what took me a long, hard time to come to grips with? That people are entitled to believe these things. That they are entitled to talk about these things. That they are just as entitled to stand out there in the public square as I am.
I don't know if you identify as a liberal or a progressive or what, just that you're 'not conservative'. Well, I was as zealous as you to burn the witches and the heretics. Except my heretic and witch is someone on your side, probably, and the views that you think are right, good, and proper.
I've had to work hard to learn to tolerate the people in pointy hats on broomsticks on your side. Especially when many of them have long mocked the judge's position on obscenity ("I know it when I see it") when it came to things they wanted made legal, but are now turning around and applying the same test themselves: "I know witchcraft when I see it, and I demand it be banned"
https://history.wustl.edu/i-know-it-when-i-see-it-history-obscenity-pornography-united-states
So, first off, this is what I get for not reading the original post in enough depth to see that it was very "HBD" specific and giving a general conservative gripe instead of something specifically on topic. My bad and I apologize.
I'm not interested in defending HBD in general, but I'd expect this trope to apply to them 10x. I can't imagine most HBD sites are fun or interesting places to post, because most of the HBDers I see here are...not great people. But if there's 1000 cruddy HBD sites and one decent site you can talk on, I don't think there's any great mystery why they keep showing up. As for why the mainstream shuts them out so hard, I agree that it's for a host of good reasons. I don't think the logic you presented is particularly appealing, since mainstream society has been wrong on a wide spectrum of bipartisan issues within living memory, but I do think this line "If you want to tear down that fence, you have to be very sure you know the consequences before doing so" is incredibly true.
And yeah, conservative "ghetto" is a bad term but I genuinely can't think of a more communicative term. It's the general idea that, because network effects are real, most people will stay on websites they don't like because all their friends are there and only the weirdos go to alternatives. At which point the alternative website is full of weirdos and it's not a very good place to post. Remember Voat? (1)
So yea, bad post on my part, and I would have written it very differently if I'd spent more time reading it, but I fundamentally don't think this is complicated. HBDers come here because, well, it's nice here and (I'd bet) the nicest place that will tolerate them by a wide margin.
(1) https://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/
> sometimes I forget that people can live in parallel worlds.
Miss me with this postmodernist subjectivity bullshit. You so-called 'intelligent' conservatives lived in a world where the Christian fascist theocratic movement installed a strongman who sent a mob into the Capitol to disrupt the peaceful transition of power and now demand my respect as if you have a place to stand in your "oh we just live in another screen."
Yeah, and your screen is wrong and stupid.
> So yeah, there's no secret Reactionary signal Scott is sending up in the sky.
Wrong.
> Sincerely, this is by far the best place for intelligent conservatives to discuss things.
"Intelligent" conservatives in what way? Are you here to reinforce your consensus reality? That would be the Dunce Trap.
> Everywhere else literally bans us or is a conservative "ghetto".
Oh you're still in a 'ghetto,' you just live in denial about it because you chase off anyone who will challenge your views.
Settle down, cranky.
Thank you for your contribution. I hope your day is going well.
Oooh, WoolyAI, you should know from what the Passion of the Flower posted, so how do I get in on this Christian fascist theocratic movement? I keep seeing the progressives telling me that this is happening all around and the Christian fascist theocrats are running the place since the 80s, but I keep not getting an invite, and I can be a Christian theocrat no bother!
Is there a uniform? Do we have our own flag? Are there medals? What are the hours, only I wouldn't be able to devote meself full time to the oul' racism:
https://www.youtube.com/watch?v=6zkL91LzCMc
I'm not an HBD guy, and I don't read Moldbug, but as a conservative who has hung out around Scott's blog for years I can say that it's obvious why his content is compelling to me: it's because he only bans people when they're breaking the Rule of 2 or otherwise being uncivil, and he doesn't mind discussing an idea with someone even if he disagrees with it. I've learned I can trust Scott because he doesn't say things he doesn't believe just to make sure people see him as having the "right beliefs". He cares more about truth than what is heretical to consider. Lately he has decided not to talk about certain topics that cause him to receive more hate, but I trust that he doesn't lie about the things he does write about.
Why do you think his content is compelling to people?
Scott _knows_ why: he invited them in consciously in an attempt to secure more readers; those leaked emails told us this. Whether or not he's learned from the experience is, perhaps, the open question. (The unfortunate part of writing for a large crowd is you necessarily become a subject.)
I support prison sentences for those who commit crimes and those who induce others to commit crimes. I support social opprobrium for those who believe that what's needed is a single individual to take on all the power because that's, truly, dipshit moronic idiot grandiose bullshit. Anyone who can't see that is an idiot who needs to look into the bloody history of monarchy.
> one or two degrees of separation
Your relation to politics is completely broken. You seem incapable of modeling ideologies as directed by leaders except as some spherical cow model of nodes in graphs. I think you could benefit from posting online about politics a lot less and reading about political theory a lot more, for at least a few years.
I don't come in here and say that anyone and everyone who supports SFBA Rationalism should be shut out of public discourse because that essentializes a movement. Thus it is that your attack of Black Lives Matter is braindead stupid, your false equivalence is rejected.
Reading about political theory? Might as well read the Kabbalah or Family Circus, for all the relevance that load of ivory-tower navel-gazing has ever had to events in the real world.
lmao political nihilism is for pseudo-intellectuals
Hmm..I would probably have said nihilism is more the province of the sophomoric. But maybe that's not very different from what you mean, so I suppose I mostly agree.
The flower lashes its petals in righteous fury!
"idiot" "mindfucked" "shitty whataboutism" "fucking idiot"
Ah, how I have (not) missed this level of austere and clear argumentation.
I call it like I see it specifically because I saw racism and fascism as protected speech in Scott Alexander's enclaves. Every single one of them had more ambient rightwing shitfuckery than average. And the moderators seemed clueless to it: it was just the background.
Thus: dunce traps.
> Just as God made me, sir.
Proud. Ignorance. So common among the SFBA Rationalists.
> For example, the democratic state of Weimar Germany empowered a particular monster we're all familiar with and who probably out-killed most historical monarchs.
Thank you for making my point for me. The thirst for a monarch type government is nothing but a thin veil over this desire for strongman politics, made in ignorance of the degradation into violence inherent in empowering a single individual.
> I'm also not interested in silencing its proponents.
There's nothing virtuous about refusing to reject ignorance on some imagined principle: you end up hearing out idiocy and popularizing it (Scott, this is to you).
To the clear leader there is little difference.
"The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”"
You can. You have to. If prediction markets are what works to help you, then go prediction markets.
I've been through this with the entire sexual and other abuse scandals in the Catholic church. It's really awful when you have to accept that all the horrible stuff is indeed true, and one reaction is naturally "How can I ever believe anything or anyone ever again?"
I'm still Catholic despite it all. It's the wheat and the tares, and we just have to try and do the best we can until the end. There will always be bad actors, but we should not let that make us doubt everything.
https://www.biblegateway.com/passage/?search=Matthew%2013%3A24-30&version=ESV
One of the deep indignities of the timeline that we live in is that the Catholic Church realized it was infested with pedophiles and took action in response...and that action wasn't to reassign all the pedophiles to a remote parish in Northern Québec where they could be clubbed over the head and buried in shallow graves.
Your comment says loads about you. All good. Nice to share a thread with you and the others here who in the main seem compassionate. I too hope Scott can get past the initial shock and err on the side of trust. What did Reagan say, "Trust but verify"? Trust should be the default for any happy person. To live a life filled with suspicion and distrust is a horrible fate. So my advice is to not globalize distrust because one institution/individual broke the covenant. Trust and compassion are the glue that holds humans, families and societies together from the forces that splinter and sunder us.
Agree! Situational caution and due diligence never hurt!
Three months ago I had never heard of EA, prediction markets, FTX, or any number of guests of the modern scene that everyone else on this thread takes for granted. I am old and out of touch; the world moved on while I stayed still.
So my opinions only have limited value; they are what one might hear from a reasonably well educated liberal, put in cryogenic storage in 1979 and just thawed in 2022! I and my impressions are truly from a different era.
But here, for what value there may be, are some opinions.
1. EA seems a good concept, but I detect a little hubris that might lead to cultic qualities down the road (cults were a problem in my era). But it would be a kind of crowdsourced, decentralized one without the usual charismatic leader. There are obvious downsides to diverting philanthropic energies from small-scale present benefits to notional large-scale far-future benefits. One starves the present to feed a future that may never instantiate. Best, seems to me, to establish some ratio, perhaps 80/20, to do both. The EA community, if it's identical with the rationalist community, seems to overthink things a bit; to get lost in analysis and minutiae. Might be best to take a break every so often and drop the glowing screens. Go outside and hike or do physical labor; put on jeans, boots, and work gloves. Ground. All of this stuff is extremely ephemeral after all!
So much for EA, both admirable and problematic
FTX and the financial world that gave birth to it. Mixed blessings, but badly in need of regulation. Seems fragile, has questionable grounding in real value, so falls under a strange variation of the Red Queen Hypothesis. If notional value and traditional "real" value are competing for resources perhaps we need to look at Competitive Exclusion concepts? Over my head and pay grade, in any case.
Prediction Markets. Ingenious innovations (tho' variants must have been around for a long time). Seem to be gambling under a different name. Are they regulated?
ACX: You all are collectively the most impressive group of thinkers and writers I've ever seen outside of graduate seminars. I'm seriously not in your league and in over my head, besides being behind the times.
That's all. TL;DR! (the time traveller learned finally what that meant. Short attention spans in this era!)
Just so you know, after TL;DR you're supposed to write a short summary for those who did indeed consider it TL and DRd it. It can be a bit more blunt, like so:
TL;DR Put a summary after your TL;DR you oldie
My faults are many.
And worse, I'm not even a quick learner!
[Signed]
Oldie
"You all are collectively the most impressive group of thinkers and writers I've ever seen outside of graduate seminars."
Damned with faint praise.
🙂 you just proved my point!
Regarding your penultimate paragraph, I think there is a strong bias towards noticing higher quality content, and therefore ranking yourself lower relative to it.
For what it's worth, I was scrolling through a whole bunch of comments that seemed to contribute little (to me, at least), then I got to your comment, read it, learned a few new terms (e.g. Red Queen Hypothesis), and took a moment to respond.
So at least from this sample, it seems to me like you are certainly not a below average contributor here.
That said, while it is nice to know the limits of one's own credentials, knowledge, and abilities, I think that credentials are a poor measure of intelligence, and intelligence is a poor measure of being right.
Many people can be intellectually gifted, but if they don't bother systematically using those gifts to find truth, then their abilities are not really relevant. The democratization of knowledge through the internet and other media has increased the ability of moderately intelligent or credentialed people to gain a great deal of knowledge on topics of their choosing.
So while I think that you are probably very much in the same league as median posters here, even if you weren't, that would hardly be a reason why you wouldn't be entitled to an opinion or to your own contributions.
Have a wonderful day!
I'm wondering whether we should be expecting to see a system of contractual courts evolve in the crypto space. I can't remember what David Friedman calls them.
I've been dubious about the idea because I'm not sure of where the initial trust comes from.
"I'm wondering whether we should be expecting to see a system of contractual courts evolve in the crypto space. I can't remember what David Friedman calls them.
I've been dubious about the idea because I'm not sure of where the initial trust comes from."
I think the crypto crowd is trying to do this with 'smart contracts' ... also on a blockchain so no initial trust is required. Ethereum is supposed to be one of those blockchains that enable smart contracts (I think).
Smart contracts are basically automated programs that do blockchain operations. They can be used to make automatically enforced rules, like "if X happens, send Bob a bunch of money," and that's useful for finance stuff.
But the issue is that contracts can be buggy. Maybe you can trick the contract into releasing the money early, or sending the money to Alice instead. In that case, a self-enforcing contract is worse than useless - all that cryptographic power is being used to ensure that your money is irrevocably transferred and no court can force it to be returned.
I don't see a way around this problem, because crypto is designed from the ground up to make it impossible for any one entity to revoke transactions - you would need the entire network to agree to that. (Which has happened - once in Ethereum's history the developers decided to roll back a big hack that stole a lot of money and would have killed their proof of concept - but isn't really reliable.)
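To make the "if X happens, send Bob a bunch of money" pattern concrete, here's a toy Python model of a self-enforcing escrow. The class, names, and the `oracle` callable are all invented for illustration; real smart contracts are on-chain programs (e.g. Solidity on Ethereum), but the trust structure is the same:

```python
# Toy model of a self-enforcing escrow "contract" (illustrative only;
# real smart contracts run on-chain, not as local Python objects).
class EscrowContract:
    def __init__(self, payer, payee, amount, oracle):
        self.payer = payer        # who deposited the funds
        self.payee = payee        # who gets paid if the condition holds
        self.amount = amount
        self.oracle = oracle      # callable reporting whether "X happened"
        self.released = False

    def release(self):
        # Automatically enforced rule: "if X happens, send Bob the money."
        # Once this fires, the transfer is irrevocable - which is exactly
        # the problem when the contract (or its oracle) is buggy.
        if self.released:
            raise RuntimeError("funds already transferred, irrevocably")
        if self.oracle():
            self.released = True
            return (self.payee, self.amount)
        return None

# Usage: pay Bob 100 units once the (trusted!) oracle says X happened.
happened = {"x": False}
contract = EscrowContract("alice", "bob", 100, lambda: happened["x"])
assert contract.release() is None      # condition not met yet
happened["x"] = True                   # if the oracle can be tricked, so can the contract
assert contract.release() == ("bob", 100)
```

The point of the sketch: whoever controls, or can fool, the oracle controls the money, and once `release` fires there is no court that can reverse it.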
The problem is that there's really no provision for enforcement. When someone does a rug-pull, the only consequence is that they lose whatever name they were operating under. (There can, of course, be consequences outside the crypto-community, but that's really saying "We need government regulation!".)
FWIW, I'm generally extremely skeptical about the value of crypto-currencies, except as a means of doing illegal financial transactions. (Even then there needs to be some external enforcement mechanism, or all you've got is a con game.)
Does anyone know of prominent voices in the EA movement that were warning ahead of time that FTX was possibly fraudulent? I ask because although Scott addresses that EA doesn't support such things in theory, there's another question about whether EA is just basically competent at evaluating risks. That's supposed to be their whole thing, and yet in one case where we know the final outcome, they blew it about as hard as possible. If you are worried about, say, AGI due to the messages put out by EA, you probably need to take another good hard look at those beliefs.
I think the problem with FTX is worse than if it were fraud. I think SBF thought he understood finance better than other people, and went headlong into something that's been a known failure mode in financial situations for many years - not keeping enough cash on hand to account for withdrawals.
This aligns with one of my criticisms of EAs, that it's mostly made up of intelligent young people who equate intelligence with knowledge and don't know what they don't know. SBF should have known that using client money to prop up his other business even while incurring losses was a known failure mode and that it could easily end in disaster. But apparently he didn't know, and didn't have anyone in the room with him who could have helped with that. If he had a Goldman Sachs executive advising him, he might have been told long before this blew up. It would have limited his reach, and wouldn't have been as exciting on the way up, but it may have prevented the drop.
I feel a little bad for anyone who put a child in charge of their money, but frankly that's how we all learn lessons. The guy is 30 years old now, so people were giving a 20-something billions of dollars in a highly speculative field for him to run out of Hong Kong and the Bahamas. If they didn't know those things, then that's on them too.
From what I understand, it wasn't exactly a Ponzi scheme - it was more like an Enron scheme, where FTX used its own crypto, backed by actual shares in the company, as collateral for loans to make prop trades. It collapsed when a rival realized what was going on and sold its own holdings of FTX-backed crypto, leading to the Enron-like collapse of FTX.
Which is to say, it was definitely a deliberate fraud (P>0.99 IMO). A bit more clever than the usual crypto fraud but there was no way to do what they did by accident.
I could certainly be wrong, but his behavior in this collapse doesn't seem to me to match someone deliberately defrauding anyone, but instead someone who got caught doing something very dumb and not realizing how stupid his decisions leading up to it really were.
I find it hard to believe that someone capable of setting up a financial scheme like that would be completely unaware of the financial history of such things, especially considering the recency of Madoff, Enron, MF Global, etc. And it's not like he's some tech bro without any background in finance; both he and his accomplice gf had enough financial background to know exactly what they were doing.
Seems a lot more likely that his current behavior is a sociopathic attempt to play dumb.
That's certainly possible. My gut is still that he's a very intelligent idiot, who knew how to work in financial markets but not why there may or may not be rules and separation. Keep in mind that he was like nine years old when Enron happened and still only around 16 when Madoff pled guilty.
Yea I'm basically the same age as him and have never worked in finance. I know about Enron/Madoff and why investing customer funds is a huge no no. The idea that he didn't is laughable.
I deleted the markets, EAs were taking loads of flack for being galaxy brained, so it was poor timing from me.
I have pretty severe seasonal affective disorder, instead of dealing with antidepressants and light therapy each winter I wondered if I should just up and move down south to Texas or Florida, does this work for stopping the disorder? Have any of you done it and what do you recommend?
I have been debating this as well, and was reminded just how much the winter sucks for me yesterday when we got hit with our first snow of the year and I had to scrape off my car. It took me until recently to realize that even though almost everyone complains about winter, not everyone feels it as severely as I do (and it sounds like you as well). I was reminded reading this post (https://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/) linked the other day that it's not universal to never feel fully awake 4 months of the year (even with the max dose of Wellbutrin in my case).
If you're like me and have a partner who loves the winter or is otherwise not sold on the year-round summer of Florida, I'd recommend doing what we did and trying out somewhere very sunny but still wintery (in our case, St. Moritz, Switzerland, which Google tells me has 322 days of sunshine a year, and the high altitude means it's intense light as well). This experiment helped me verify that the cold is just as big a player for me as the light, but it certainly was a colossal improvement over the northeastern US. California, Nevada, New Mexico, and Colorado all have options for winter + sun if you are in the US and want to stay domestic (or if you need more options for possible job locations if you're moving permanently and don't work remotely).
If moving is not in the cards right now, one or two week long trips to Central America in the winter feel like an injection of serotonin that last for up to a month after returning home for me. I work remotely, so I travel on the weekend and don't even take any time off. They're not very complex or exciting trips; I just rent an Airbnb with a patio and bask outside on my laptop all day :)
That's good to know, thanks for your input!
Instead of moving your whole life just rent a place for a few months and telecommute.
Also, as a Dallas resident, our winter is gloomy and cold too, though for, say, a native Michigander the cold is probably small potatoes. It was dark at 6 pm here yesterday (Nov 13).
But you may need to consider a Puerto Rico stayover as well.
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America."
I can't speak to the finance side of things (though are they looking at 'is this a scam,' or are they looking at 'will this make money?' those are two different questions and for a while it made money). But the other examples don't seem great to me?
Taking people's money, so long as it doesn't come with strings, doesn't usually mean you've vetted/agreed with them; quite the reverse, in fact. And the SEC thing was that regulations were needed, which just seems transparently correct at this point? Now, SBF was presumably trying to use them to limit competition, without limiting his ability to commit what really looks like fraud, but it's not at all clear that he would have succeeded in that, even if everything hadn't collapsed.
> 9. The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”. So I was pleased when Nathan Young figured out the obvious solution: make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively [...]
This is being taken way out of context to show what total freaks we are. I don't think people realize this is tongue in cheek.
(This is tongue in cheek, right?)
I think there are no tongues in these cheeks. Unfortunately.
My twitter is blowing up thanks to you guys. Pls keep up the content
I have a question about election odds and prediction markets generally -- anytime I see backward-looking analysis, it all says they're well calibrated, etc. But those analyses I've seen seem to just take one data point of odds during election day -- "if the odds are 60% on election day, that candidate wins 60% of the time" for instance.
But that doesn't seem helpful to me -- what about 1 year in advance? 6 months in advance? Have those odds proven well calibrated? I'm very surprised these markets don't hover very close to 50/50 until about a month out.
I think you'd need a lot more data to get a reasonably meaningful calibration at earlier points, because there's more noise to work through.
"But that doesn't seem helpful to me -- what about 1 year in advance? 6 months in advance? Have those odds proven well calibrated? I'm very surprised these markets don't hover very close to 50/50 until about a month out."
I think assuming a 50:50 split when there are lots of unknowns is one of the classic statistics mistakes.
1 year out, for example, I'd assume that an incumbent US Representative would have something like a 90% chance of retaining his/her/its seat. At the beginning of a football season (college or NFL) every team does NOT have an equal chance to win the championship. Etc.
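To put numbers on why a blanket 50:50 prior loses to a known base rate, here is a quick Brier-score comparison using the hypothetical 90% incumbency figure above (illustrative numbers only, not real election data):

```python
def brier(forecast, outcomes):
    """Mean squared error between a constant forecast and 0/1 outcomes (lower is better)."""
    return sum((forecast - o) ** 2 for o in outcomes) / len(outcomes)

# Suppose 90 of 100 incumbents win, matching the hypothetical 90% base rate.
outcomes = [1] * 90 + [0] * 10

naive = brier(0.5, outcomes)       # always guessing 50:50
base_rate = brier(0.9, outcomes)   # forecasting the known base rate

print(round(naive, 4), round(base_rate, 4))  # 0.25 0.09
```

The base-rate forecast scores almost three times better, which is why markets sitting far from 50/50 a year out can still be well calibrated.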
Point taken, but they still seem overconfident/overly reactive, and to be clear I was talking about aggregate House/Senate odds, not just individual candidates. I'm happy to be proved wrong or my misunderstanding corrected, but I am just never convinced by these calibration analyses.
I'd say that predictions more than 3 months out are untenable. However, even under that standard, PI for instance did horrifically this year.
It is a good point, though. Many prediction markets are open for much longer than 3 months, and it would be good if analyses took the forward time span of the prediction into account. (Good predictions would be more impressive, too.)
The markets longer than three months would be primary markets, candidate markets, etc. The GE market at best can open post primary but personally 3 months is the longest I'd say you can make useful predictions.
Re: point #2
> But right now is a great time to be a charitable funder: there are lots of really great charities on the verge of collapse who just need a little bit of funding to get them through.
I don't actually know what order of magnitude "a little bit" means here. I'm not a VC or anything, just a fairly boring person who happens to batch their charitable donations to once per year for convenience (yes, I already know this is not generally how charities prefer funders operate). I suspect when Scott asks for potential charitable funders he's talking about bigger game than me, but if a four digit sum of money would make an outsized difference somewhere it would be nice to know about it.
Think the triage process Scott's working on will publish a shortlist of in-trouble charities soon, for small donors like me? Or is there already a post on the EA forum or somewhere that I haven't seen?
Have people done calibration studies on prediction markets? E.g., take all the markets that had $0.60 as the final price for yes and see if 60% of those resolved to yes. I'm especially curious if prediction markets show any systematic overconfidence or underconfidence in their results.
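The study described above is straightforward to sketch given a set of resolved markets. A minimal version (toy data; a real analysis would weight by volume and handle sparse buckets more carefully):

```python
from collections import defaultdict

def calibration(markets, bucket_width=0.1):
    """Group markets by final YES price and compare against the actual resolution rate.

    `markets` is a list of (final_price, resolved_yes) pairs.
    Returns {bucket_midpoint: (empirical_yes_rate, count)}.
    """
    buckets = defaultdict(list)
    for price, resolved_yes in markets:
        mid = (int(price / bucket_width) + 0.5) * bucket_width
        buckets[mid].append(1 if resolved_yes else 0)
    return {mid: (sum(v) / len(v), len(v)) for mid, v in sorted(buckets.items())}

# Toy example: markets closing around $0.60 should resolve YES ~60% of the time.
toy = [(0.62, True)] * 6 + [(0.61, False)] * 4 + [(0.15, True)] * 1 + [(0.12, False)] * 9
for mid, (rate, n) in calibration(toy).items():
    print(f"price ~{mid:.2f}: {rate:.0%} yes over {n} markets")
```

Systematic over- or under-confidence would show up as buckets whose empirical rate consistently falls above or below the bucket midpoint.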
They seem pretty well calibrated: https://electionbettingodds.com/TrackRecord.html
Manifold tracks their accuracy here:
https://manifold.markets/stats
They recently shared some further analysis of it here:
https://twitter.com/ManifoldMarkets/status/1589703623935565826?t=deXkPb8hGxUCgDEdJj3vVg&s=19
Others will have better answers for other sites, but I know Metaculus has a great track record page https://www.metaculus.com/questions/track-record/
Elon Musk weighs in on SBF: https://twitter.com/elonmusk/status/1591895343570243585
Interesting.
I do think a huge area where our social/political technology is "shitty" and "underdeveloped" is not treating high-level bureaucrats and elected officials like jurors.
People in these positions should be "sequestered" and should have their connections/relationships highly scrutinized. You would probably not allow a juror on a trial if the juror's former boss's daughter was the one on trial.
Being elected to, say, Congress should be a 2-year ticket to a bunker in the Nevada desert where there is no access for lobbyists and where the information allowed in is tightly constrained to publicly available sources. Maybe not quite that extreme, but close.
These are the most important positions in our society and the standards are just atrocious in terms of who gets selected. Fuck, the president some cycles seems like they probably aren't even a top-10% person for the job. Like there are literally millions (tens of millions?) of people who would do a better job.
On point #5, EA does not endorse doing clearly bad things, but prominent EA people such as MacAskill have definitely endorsed taking big risks. SBF's thinking and behavior is very much in line with the EA idea that an action that will probably fail is justified if it has mathematically higher expected value compared to other options.
For instance, in What We Owe The Future (appendix) MacAskill argues that we should not be afraid to "chase tiny probabilities of enormous value"—in other words, we should take actions with the aim of improving the far future, even if the likely outcome of those actions is nothing. He draws an analogy to the (supposed) moral obligation to vote, protest, and sign petitions, even when again the likely outcome is nil. In MacAskill's example, say you can press Button A to save ten lives, or Button B to have a one in a trillion trillion trillion chance of saving one hundred trillion trillion trillion lives. If you're a normal person, you press A and you save lives. MacAskill says you should press B, even knowing that the likely outcome is nothing.
This is directly analogous to SBF's idea that we should weight money linearly (in other words, rejecting decreasing marginal utility of wealth). SBF is willing to "accept a significant chance of failing" in exchange for a small chance of doing a lot of good.
So MacAskill and SBF both endorse taking actions with a large chance of failing if the expected value is high enough, whether that's speculating with customer funds or pouring resources into uncertain projects with a very tiny chance of shifting the far future in a positive direction.
Now there's a distinction between "very risky actions" and "clearly morally bad actions"...but that line is not so bright. SBF took a risk (morally as well as financially) and failed. But no one would be criticizing him if he had succeeded. FTX took big risks, as EAs advocate, and failed. But EAs should understand and acknowledge that frequent failure is a predictable outcome of taking big risks, and, given these values and assuming the math is correct, failure doesn't prove that the actor's underlying thinking was wrong.
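For concreteness, the expected-value arithmetic in MacAskill's button example, plus the linear-vs-diminishing-utility point about SBF, can be worked out in a few lines. The figures are the comment's hypotheticals; the 51% double-or-nothing bet is an illustrative stand-in for "weighting money linearly", and ruin is modeled as being left with $1 so the logarithm is defined:

```python
import math

# MacAskill's button example (trillion = 1e12).
p_b = 1 / 1e36                 # one in a trillion trillion trillion
lives_b = 100 * 1e36           # one hundred trillion trillion trillion lives
ev_a = 10                      # Button A: ten lives for sure
ev_b = p_b * lives_b           # Button B's expected value, ~100 lives
print(ev_a, ev_b)              # naive EV says press B, ~10x better

# "Weighting money linearly" vs. diminishing marginal utility:
# a 51% double-or-nothing bet on the whole bankroll.
wealth = 1_000_000
linear_gain = 0.51 * (2 * wealth) - wealth             # expected profit in dollars
log_gain = (0.51 * math.log(2 * wealth)
            + 0.49 * math.log(1)                       # ruin, modeled as $1 left
            - math.log(wealth))                        # change in expected log-wealth
print(linear_gain > 0, log_gain > 0)                   # True False
```

Which is the distinction the thread is circling: an EV-maximizer with linear utility keeps taking that bet until ruin arrives, while any ordinary diminishing-returns utility says never stake the whole bankroll.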
>For instance, in What We Owe The Future (appendix) MacAskill argues that we should not be afraid to "chase tiny probabilities of enormous value"
In other words, Pascal's Mugging?
"But no one would be criticizing him if he had succeeded."
That depends on the specific mechanics here. If he took money out of trust accounts, I don't care if he succeeded or failed, it's not his money to gamble.
(I've mentioned before, MacAskill's worst-case scenario being no change is dangerously wrong, and is why it's important to focus on causes whose results can be directly observed. You can press button a to save ten lives, or you can press button b to have a one-in-trillion chance to help trillions AND a one-in-million chance to kill tens of thousands. The far future loves to juke you.)
As SBF showed, the EA framework is a great tool for rationalizing whatever cause resonates with you. He likes AI stuff, so he donates to that cause. Then the EA branding makes it seem like he is doing real charity in the service of mankind, rather than tinkering with his hobby.
But I expect to see a lot of EAs/rats bending over backwards to distinguish themselves from SBF, simply because he's currently unpopular.
What the heck kind of suppression is this? FTX is an almost unprecedented blowup with huge political implications because they were the second largest Dem donor after Soros, nobody knows key details because they were radically non-transparent, by definition SOME kind of conspiracy was involved in a situation like this so ANY investigation of who did what will be possible to disparage by calling it a “conspiracy theory”, and this is a frigging OPEN THREAD.
Open your eyes, man.
Was this meant as a reply to another comment or a reply to Scott's comments in the post?
It was in reply to another comment
In case this is the problem: if you're replying by email it'll make your comments top-level instead of responses, so you need to go through the app/site for that.
Yes, that was the problem, but I’m not going to bother trying to correct it now, people can just go through the tree below my previous comment to see what I was responding to.
Ok I reread Scott's opening and must have missed the relevant section! Please ignore :)
I still don't get it. Is there a part where Scott says "don't talk about FTX?" Or are they saying it deserves its own post?
My correction was actually wrong too as the original comment was a reply to a convo down thread about some rumored conspiracies involving Ukraine. But I am not 100% certain of the whole thing!
Nefarious activities involving more than one collaborating person which were concealed from the public.
I do not think SBF was the only human being who was aware that FTX was failing and was hiding that information. His girlfriend Caroline Ellison who ran Alameda must also have known, but probably many more people did.
I was just sitting here, before this open thread, thinking about how my inner critic is excessively harsh and just kind of an asshole. Then I open your thread and see what I think looks like you being hard on yourself for what seem to be similar reasons. You’re doing great work, Scott. If you never trust a scammer at least once in your life, maybe you are missing out on lots of chances to do real good?
Your inner critic isn't that harsh. There is a lot more room for skepticism/cynicism in you and Scott.
Crypto is such a big scam. Lots of VCs and investors are into it because there's money in it. That doesn't mean people have to have amazing insight to beat their assessment of FTX, just basic due diligence that while FTX might be a money hose at present it's built on scams. Crypto is useful for crimes and some extremely limited database functions. It's a scam! Always has been. So yeah, easy for people with a basic understanding to beat investors on the question "is this a reputable organisation" even if they should defer to the investors on the question of "whether or not this potentially criminal enterprise will make money."
You know what I just remembered? NFTs. Remember all that? Over about a 6-month period it went from new thing, to "is this as stupid as it looks?", to that famous long YouTube video ("Line Goes Up"), to, hey, a bunch of scams and rugpulls revealed, and now nobody thinks about it anymore. Loosely speaking. (I notice that, in my semi-expanded view of this entire thread, there are 0 mentions of it.) Maybe there was something to be learned there.
I think "scam" is overselling it. It's a poor investment, certainly. But as a transaction mechanism in certain cases it has a niche. The inherent problem is that people started treating it like they would a new Silicon Valley startup rather than what it was: ForEx. And huge returns on ForEx are highly unlikely.
It's the old old rule: "if it seems too good to be true, then it is".
You don't get easy money like that, there is no such thing as a free lunch, and eventually the cows come home and the chickens come back to roost. The problem with electronic trading like this is that it is all in the ether, there's nothing real there, so it's easy to shuffle it about and make huge illusory gains - which then turn into real losses.
Whatever about cryptocurrency as a new unit of exchange, it went the old route of "people want to make money off this thing", and they found they could make money by speculating on it more than by using it as a currency, so speculation became the way to make (and lose) fortunes.
People are genuinely surprised by the collapse of FTX. Yet there is no shortage of smart people who have been arguing that crypto is a Ponzi due for collapse. Nobody who calls themselves a rationalist should be surprised, any more than a gambler who loses at roulette. Collapse was a well-known possibility.
I had an opportunity to buy Bitcoin in the very early days, and obviously I could have done so since then. But I can't escape my reasoning from back then, which is just as correct today. Bitcoin (and all "investments" that are only valuable because other people are buying them too) really are a scam. It's a pyramid scheme. There's no underlying value. If you make millions of dollars, which many people have, it's literally at the expense of someone else who put money into Bitcoin instead.
How much does this apply to government currencies?
Modern government currencies are backed by, among other things, the very real value of not going to jail for tax evasion. If you do business in a nation, you must pay taxes in that nation's currency or you will be going to jail (or maybe just having all your stuff taken by the government). Regardless of the ethical questions surrounding taxation with or without representation, so long as governments *are* in that line of business, the stay-out-of-jail-for-a-price nature of fiat currencies is a thing of real value that guarantees a real demand for that currency.
Maybe not as much value as you and/or the government were hoping. But if someone offers to pay you in dollars, there is no risk that you'll be stuck holding a bunch of dollars when everybody else says "we now think that this was all just a scam and none of us want your dollars any more".
This is true most of the time, but countries' currencies have become almost worthless before due to hyperinflation. If US dollar inflation next year is 40%, you really don't want to be holding US dollars, even if the IRS remains in business.
Thanks for the strong version of the argument. But:
1. Technically you can pay your taxes with a debit card backed by crypto which gets converted to fiat at the last instant, so taxpayers aren't obliged to own any fiat outside of the last nanosecond before the deadline on April 15
2. Some large percentage of Americans have no tax liability
3. Some large percentage of Americans need bitcoin to gamble online or buy porn or all sorts of other e-commerce that traditional payment processors look down upon. Not as big as the demand for taxes, but only a difference of degree. The USA is just a very large corporation that chose to accept dollars in payment for its services. Bitcoin will always have some of those, albeit on a smaller scale.
Fiat currencies are literally centralized shitcoins backed by nothing and inflated at will by a central bank.
The dollar is just numbers in a database, with supply limited only by the whims of a handful of humans at the fed.
Bitcoin is just numbers in a database too, but the supply is tightly limited by an algorithm and a very strong consensus against ever changing that algorithm. That's a big improvement for the purposes of storing value over time.
"The dollar is just numbers in a database, with supply limited only by the whims of a handful of humans at the fed."
No, fiat currencies are supported by one of the most compelling aspects of humanity - violence.
The controlled application of violence is how they maintain stability, and until cryptocurrencies can secure themselves in the same fashion they will continue to be a pyramid scheme.
If the product is so bad that you need to use guns to force people to use it instead of the competitor, maybe reconsider the product.
People all over the world want dollars.
Currency is millennia old and one of the best economic coordination tools invented. Cryptocurrencies are rubbish. Bad as tokens of exchange, account or stores of value.
You sound like an economist "sure fiat currency works in practice, but it doesn't work in theory."
Exactly.
Not at all. Governments have two ways of giving underlying value to currency. The first, now out of fashion, is to pledge its exchange for a tangible asset, e.g. gold or silver (even land has been tried). The second is to be willing to accept it in payment of taxes. Since pretty much everyone owes taxes, and taxes are usually the single largest expense of any wage earner, this immediately gives value to the currency: even if no one else will accept it, your single largest creditor will accept it in payment of your single largest debt. Even if you used BTC for every commercial transaction in your life, if the USG only accepted dollars for payment of taxes, you'd have to keep a big store of dollars around, and they would be valuable to you (and everybody else).
The same would be true for crypto -- if it were widely accepted as payment. That would give it underlying value. However, unlike fiat currency, there does not exist an enormous nearly universal creditor that could give it value all at once, shazam, for nearly everybody, the way a government can. So it has to build such acceptance one economic player at a time, and clearly that has the risk of powerful network effects, both helpful and (in this case) damaging.
Somehow taxes didn't prevent hyperinflation in any of the countries that had hyperinflation. So in what sense do taxes guarantee the value of a fiat currency?
Well, they don't, if you have a government that deliberately inflates its currency, and as far as I know there hasn't been a case of hyperinflation that didn't start off as a quite deliberate attempt to inflate away government debt. It just turns out to be hard to keep the fire under control once you start it.
I certainly don't mean to suggest that government can't *destroy* the value of a currency; they absolutely can, in a number of ways. I was just addressing the fact that government, unlike current cryptocurrencies, has the unique ability to *establish* the value of a fiat currency in one fell swoop, and that taking it in payment for taxes is how it's done.
Nitpick: sometimes rapid inflation results from an unplanned currency crisis due to import overreliance, as opposed to an intentional government plan to devalue sovereign debt. EDIT: Carl Pham correctly points out that these situations rarely, if ever, meet the common definition of hyperinflation.
AIUI this is what happened recently with Sri Lanka. https://noahpinion.substack.com/p/why-sri-lanka-is-having-an-economic
taxes are only part of the denominator.
I'll try to be nice: you seem to be parroting talking points (i.e., politically motivated "questions" which are disingenuous), while your question otherwise implies an ignorance of even basic economics.
So, to answer your question, taxes alone can't stop hyperinflation in situations where hyperinflation is going to happen; there are other ways to avoid hyperinflation. We see the US Fed currently raising interest rates to curb inflation, for example. This is an alternative to raising taxes. We could also just like, let inflation keep running at 8-10% for a while. There is no reason to expect hyperinflation in the US due to current fiscal or monetary policy and there is no indication to me that any political groups with *any* serious influence have plans to promote policies which would even threaten hyperinflation.
It doesn't apply to government currencies, because governments accept their own currencies as protection money. (If you don't pay the government, they'll take your stuff, and perhaps store you in an unpleasant place.) This is what gives "fiat currency" its value. Because of this, everyone else accepts the money as valuable, because they can trade it to someone who needs to pay the government.
It absolutely applies to government currencies. The question is whether these governments value consistency (keeping inflation low) and how much we trust the government and economy to stay stable. In general for major countries, we have pretty good reason to believe that both of these metrics will do well, or at least remain in a fairly narrow window. For comparison, you can look at the pricing history of Bitcoin (probably the most stable cryptocurrency) and see how wildly it fluctuates. Even that undersells how volatile it can be, since there is nothing (not even the governmental reputation government currency has) to back it.
There's also the question of whether we have a choice but to use government currency, which for most people is no.
They have guns and the law behind them. Everyone buying in is literally baked in.
That is a YUGE difference.
You'll find that third world countries are not very good at forcing people to use inferior currencies consistently. Guns and law can only do so much. Black markets are inevitable when the black market offers customers a much better deal. It comes down to consumer choice, and one possible future is that the fed allows too much inflation and undermines confidence in USD and causes people to seek alternatives.
Third world countries are inefficient at applying violence in the right place, that is why their currencies are unpopular.
The internet and cryptography makes it virtually impossible to use violence to force people not to use crypto. Fiat will have to compete on the basis of consumer choice to some extent.
Space piracy question: using known physics, is it plausible to catch up to a fleeing space craft and board it?
The limiting resource for space travel is Δv. It scales logarithmically with the amount of fuel you bring, so while the pursuer will have more Δv, it seems implausible that they have ten times as much.
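That logarithmic scaling is just the Tsiolkovsky rocket equation, Δv = v_e · ln(m0/m1). A quick illustration in Python (the exhaust velocity and masses are made-up but plausible numbers for a chemical engine, only there to show the shape of the curve):

```python
import math

def delta_v(exhaust_velocity, wet_mass, dry_mass):
    """Tsiolkovsky rocket equation: Δv = v_e * ln(m0 / m1)."""
    return exhaust_velocity * math.log(wet_mass / dry_mass)

ve = 4400.0     # m/s, roughly a hydrolox engine
dry = 10_000.0  # kg of ship + payload

# Doubling the propellant load does not double the Δv:
print(delta_v(ve, dry + 10_000, dry))  # ~3050 m/s
print(delta_v(ve, dry + 20_000, dry))  # ~4834 m/s, not ~6100
```

So the pirate ship needing "ten times as much Δv" would mean an exponentially larger propellant fraction, not a 10x bigger tank.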
I will assume that the tech to detect ships over long distance is there, this should benefit the pirates. (Hard to do piracy in fog and all that.)
Capturing a spacecraft which has the bare minimum of fuel it needs for its flight plan is not hard: track it, figure out when it will do its burns, and intercept it at some other point in phase space (i.e., match both the position and the velocity at interception time).
This does not seem like a stable equilibrium, however. Eventually the traders will carry their own spare Δv.
In that case, the trader will try to add maneuvers to avoid the interception point, and the pirate will do burns to keep up. What factor of spare Δv does the pirate have to have over the trader to succeed?
One assumption would be that the trader is traveling between Mars and Earth, and both a Mars orbit and an Earth orbit are safe havens from pirates. So the pirate has to do the interception somewhere en route.
If both start near to each other, it seems like an easy win for the pirate: they mostly have to match the velocity of their victim maneuver for maneuver, and just invest a little extra Δv to close the distance (and get rid of their relative momentum afterwards).
If they start further away from each other, the task seems harder, but I can't really say by how much.
I am also unsure if gravity fundamentally matters for the outcome or if it would be the same if the pursuit happened in interstellar space (where it is probably easy to calculate).
Also, the point of piracy would be to rob goods or claim ships and take them elsewhere than the destination the original owner had in mind. This seems to put limits on the economics of robbing bulky stuff like ice transports. Robbing stuff with a high price density seems more plausible, but these also seem in a position to have a high fuel-to-payload mass ratio.
From a delta-V perspective, catching up with and boarding the target is not the hard part. As you note, just a little extra Δv will do it. The hard part is getting away afterwards.
If the ship you just seized used half its Δv to boost onto a trajectory to Mars or wherever, and needs the other half to slow down when it gets there, then it almost certainly does not have enough propellant to change course to some asteroid pirate base and decelerate for rendezvous with *that*. So you're going to need to use your own pirate ship to carry off the cargo.
Which means your pirate ship needs to have the extra Δv to A: boost onto a trajectory that will intercept the freighter, then B: match velocities with a ship headed for Mars even though that's not where you want to go, next C: change course to Not Mars, and finally D: decelerate at Not Mars. If both ships have about the same propulsion technology and payload fraction, and the freighter uses it for the optimal trip to Mars, you're probably not going to be able to pull that off. If the pirate ship is much bigger or has much better engines than the freighter, you can probably do it, but then your ship is so much more expensive than the freighter that you probably can't turn a profit seizing the freighter's cargo.
And then there's the problem that the authorities will be able to watch all of this from halfway across the Solar system, so they'll either dispatch a punitive fleet to the pirate asteroid base or send a radio message saying "that's a pirate ship headed your way, we know it, you know it, you know we know it, so if you don't want a visit from a punitive fleet you'd best arrrest them as soon as they show up." Piracy really needs for there to be an "over the horizon" where people can't see what you're up to or where you're going.
Doing the piracy in space is a pain. Instead, bribe the harbormaster and hack the cargo loaders. Redirect goods to locations you favor while the ships are at rest.
The pirate can threaten the fleeing spacecraft with a railgun, which will have no trouble catching up.
A more interesting question is how piracy works if there is no stealth in space, and space royal navy can blow the pirate up from a safe distance.
The railgun needs to be fired from a fairly close distance, since dodging an unguided projectile is easy from a thousand miles away. So you still need to match orbits reasonably close before you can start threatening them with violence. Plus you need to match orbits in order to recover the cargo, anyway. It doesn't do any good to shoot down a merchant if their cargo drifts out into deep space afterwards.
Indeed, actually intercepting the ship seems superfluous to the goal. It would seem far more efficient to just shoot a missile at the merchant ship and then threaten to let it strike if they don't divert to your port of choosing, or at least ditch the cargo on a trajectory to your favour. Of course, that's a trick that only works until they start carrying missile defences, but even then an armed ship is always going to be at a massive disadvantage against an expendable strike.
Piracy only works if you have somewhere to sell the stolen goods. If every ship is permanently trackable by radar or IR, any ship implicated in piracy will be put on a sanctions list and seized as soon as it tries to land, lest the spaceport harboring pirates get a visit from the space force.
Depending on the circumstances, Mars, or certain corrupt officials on Mars, might be willing to risk a certain amount of collusion with Mars-based pirates until the point that it risks serious retaliation from Earth. Much as Caribbean pirates often relied on a certain amount of collusion, or at least a no-questions-asked tolerance, with local officials.
But I suppose if everything could be seen from Earth and Earth has the means of unilateral enforcement, then there's not much that pirates could do (at least without collusion with authorities on Earth).
That collusion from corrupt officials in Tortuga or Madagascar or wherever, depended on the corrupt officials being able to maintain plausible deniability about the people they were doing business with being known pirates. If the authorities can track the pirates from a distance, and they can, then any port they fly to will have been told unambiguously that they are pirates and anyone doing business with them is an accomplice to piracy.
I can imagine excuses. "Our sensors were down for maintenance! It was a bureaucratic slip up!"
With an added layer of "We poor Martians don't have the privilege of being born on a planet that has water, a breathable atmosphere, and a self-sufficient economy. So sometimes things break down or fall through the cracks. Though there'd probably be less of that if you would send more supplies our way."
But I'd agree that piracy like this probably won't happen if Mars is ever colonized (which I consider a big "if" in itself). I'm really just spitballing here.
There's no practical way to board an aircraft in flight, because of the relative wind that *starts* at hurricane strength and winds up exceeding even an F5 tornado for the planes carrying the sort of cargo that would really attract a pirate's attention. In space, boarding is fairly straightforward if you remembered to bring a spacesuit.
I dunno. Are we assuming the pirates sneak in, two by two, through the unguarded emergency airlock? Because otherwise there's a lot of ways, many pretty low-tech, to put a hole in a spacesuit. The pirates might be better off trying to put a hole in the freighter from a distance, to let all the air out and kill the crew. (Presumably holding them prisoner costs way more than you can expect in ransom for a nobody cabin boy.) But at least some of the freighter crew might still be alert enough to jump into their own spacesuits, with their guns holstered on the *outside*.
It has always been the case that piracy involved the possibility of battle, with sword and pistol and the unavoidable possibility that a pirate might wind up with a hole through which their precious life-sustaining fluid is rapidly escaping. This has historically not stopped piracy, because pirates have historically been willing to accept risk and have generally been much better at armed combat than freighter crewmen. Enough so, that many freighter crews didn't even put up a fight, because hoping for mercy gave better odds than hoping for victory.
Well, yeah, but that was in the days when 300 of you could jump over the gunwale all at once[1], and overwhelm the other crew in 60 seconds of mayhem. I'm just observing it's hard to do shock 'n' awe when you have to clamber clumsily into the airlock one or two at a time, cycle it...tum ti tum ti tum geez this takes forever...and then...I dunno, wait in the corridor outside for an hour or two, polishing your space cutlass and practicing your footwork, until your war party can fully assemble and storm the bridge.
-----------------
[1] https://youtu.be/3wUkq6JBoMQ
Judging by the progression of major shipping operations to larger and larger ships with smaller and smaller crews, I can imagine a space freighter with a handful of crew and a very big cargo being standard. A small ship with a dozen pirates may be able to easily overwhelm them regardless of how few can get through a hatch at a time.
I can imagine gravity making a difference. At a suitably high tech level (or possibly even the current one), it might be easier, for boarding actions, to travel between two craft maintaining constant relative position in 0-G than in a planet's gravitational field.
I’m looking for a software engineering side project that’s fun, useful, and won’t take too long to implement. Any ideas ?
I've been writing a small bot for Telegram in Rust that messages me a few times a day to ask how I'm doing and record the result to a SQL file. I'm using it to build up to making another that acts as a middle-man between two people to filter and modify messages (for non-malicious purposes, more of a roleplay kind of thing). You could do the same for Discord or any other system with a decent API. I dunno if that qualifies as "software engineering" though.
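The recording half of a bot like that is tiny in any language. Here's roughly what it looks like as a Python sketch (using sqlite3; the table and function names are my own invention, and the actual Telegram API wiring is omitted):

```python
import sqlite3
from datetime import datetime, timezone

def init_db(path="checkins.db"):
    """Open (or create) the check-in database."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS checkins ("
        "ts TEXT NOT NULL, mood INTEGER NOT NULL, note TEXT)"
    )
    return conn

def record_checkin(conn, mood, note=""):
    """Called from the bot's message handler when a reply arrives."""
    conn.execute(
        "INSERT INTO checkins (ts, mood, note) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), mood, note),
    )
    conn.commit()

# e.g. record_checkin(init_db(), 7, "pretty good day")
```

The scheduling side is just a loop (or cron job) that fires the "how are you doing?" message a few times a day.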
Whilst it doesn't satisfy the ancient hacker saying, beloved by GeoHotz, that "before you hack, let your design pass through three gates; At the first gate, ask yourself, is it fun? At the second gate ask, is it useful? At the third gate ask, is it short?", creating a raytracer in one weekend[1] sounds fun.
[1] Create a ray tracer in one weekend: https://raytracing.github.io/books/RayTracingInOneWeekend.html
Technically a path tracer, though the book "Ray tracer in one week" does show you how to make a ray tracer.
Reimplement an arcade game like asteroids
Does anybody know of a memory aid, such as a mnemonic, for the differences between spondylosis, spondylolysis, and spondylolisthesis?
For the mystified: https://www.youtube.com/watch?v=VZBeNGVPslw
Going entirely off that video, we have:
Spongy (spindly?) losers - Let's all point and laugh at their long-term sad sack discomfort.
Spongy Low-Lying Sis - on the ground that she's fallen down and broken her back.
Spongy Loli's Thesis - you don't wanna know. But you already do.
...best take's probably just learning the latin. I'm sure they'll all come up elsewhere.
Longer word = worse? That's how I've always remembered. Spondylolysis is easiest for me because lysis means to break up, and it's a bone break
Scott's recent post on unfalsifiable internal states made a passing mention of Galton's research on visual imagination, which got me thinking about the topic again and reading Galton's original paper on the matter (https://psychclassics.yorku.ca/Galton/imagery.htm).
One of two things has to be true. Either (A) I am somewhere close to rock bottom on this scale — I identify most closely with the response that Galton ranks #98/100 — or, (B) the people much higher on the scale are either miscommunicating or deluding themselves. The past century and a half of discourse on this topic has mostly been people higher on the scale patiently explaining in small words to people like me that no really, it's (A), and these differences are real and profound. But I'm not convinced.
I do have *spatial* imagination — the ability to hold a scene in my head as an index of objects with shapes, colors, and spatial relationships, and from there make geometric deductions. But to say there is anything visual about this imagination seems strictly metaphorical. The metaphor is a natural one, because humans derive spatial information about our surroundings mostly through sight. But when considering imaginary objects, it would be no more and no less apt to analogize my thought process to feeling around the scene with my hands and deriving information through touch.
Incidentally, I don't dream visually either. My dreams contain emotion, thoughts-as-words, proprioception, and sometimes pain, but I would characterize my spatial perception in dreams as just a dim awareness of what's surrounding me rather than anything visual, like walking in a dark but familiar room. The rare exceptions to this invariably are perceptions of written words.
I have no trouble accepting that there are certain commonplace mental experiences that are just completely missing from my neurology. I already know that sexual jealousy is one of those, and can easily recognize and accept that one because it has easily observable behavioral consequences: that I've been in a comfortable relationship with a polyamorous partner for seven years, while the vast majority of people run screaming from the notion of such a lifestyle. The reason I find visual imagination harder to accept is that it seems like this kind of evidence *should* exist, yet I've never seen it. The ability to visualize a scene in any literal sense, even dimly, seems incredibly useful and should have a lot of unfakable consequences! It should be easy to create a test at which anybody who has it to even a modest degree should be able to easily outperform me. Yet, on some tests that seem like they should work, I come out near the top.
I'm thinking, especially, of blindfold chess. I can play chess with my back turned to the board and just communicate coordinates with my opponent. I've even played two games at once this way, and won them both without making any blunders or illegal moves. Blindfold chess is by no means easy for me — it requires a lot of concentration — but I can do it and I've been able to do it ever since I was very young and a beginner at the game. Yet, most people, even most people who are better at chess than I am, find this ability almost unfathomable (lots of chess *masters* can do it, but I'm nowhere near that level). It seems like any degree of true visual imagination should render this task far easier. I don't understand how I can apparently be near the bottom at visual imagination, yet near the top in this skill.
This all leaves me skeptical that the differences in mental experience are anywhere near as stark as Galton claims. I think that the people who claim much more vivid visual imagination are communicating poorly and insisting that they mean their words more literally than they actually do.
What's the difference between spatial and visual imagination? When I talk about visual imagination I mean I can tap into the mental machinery that processes visual stimuli into a model without having the direct visual stimuli itself.
I "imagine" the Mona Lisa. What's her hair like? It's dark brown and very straight. Is it glossy? Kind of, there's some reflections on the top of the head. Can I picture her with curly hair? Yes, and it feels the same as when I just recalled the memory of the original picture. This I experience outside my field of vision.
It's definitely not the same as seeing with my eyes, but as for literally casting images into my vision I'd call that a visual hallucination rather than imagination.
If you read Galton's paper, a great many people seem to think that they can *literally* mentally cast images into their field of vision, and that these images have the same detail, richness of color, and field of expanse as what is actually before their eyes. I think that such claims are testable, and that testing shows them to be bunk.
I think there's no such thing as free computational power hidden in the depths of your brain.
My intuition (for whatever that's worth) is that if I wanted to visualize a mental image with the same detail as reality, then my imagination would be up to it except that I'd also have to be aware of all those details, and I can't keep that all in my head at once.
For applications like daydreaming, this doesn't matter. Our actual field of vision is also more of an illusion than we think. We focus on something and then we see more details. You can do the same thing with visual imagination, provided you either (a) have memorized those details ahead of time, (b) are willing to make shit up, or (c) a mix.
I suspect that for most people, it's mostly (b) with a tiny bit of (a), but brains don't exactly show their work, so unless you question it, the result doesn't feel meaningfully different from looking at what's in front of you.
I think we agree. Human vision is a mess and our visual field carries a lot less information than we would naively assume. Nonetheless, there's still a lot more information there than we can fit in our working memory. If you make me peer through a tube such that all I can see is the roughly 5° arc of foveal vision where everything is in sharp focus at once and read off a card placed at the end of the tube, you could fit many chess boards' worth of legible information on that card (a legal chess position can encode about 143 bits of information).
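(That 143-bit figure is just Shannon's classic estimate of roughly 10^43 distinct chess positions, converted to bits; the arithmetic is one line:)

```python
import math

# Shannon's classic estimate: roughly 10**43 distinct chess positions,
# so a position carries about log2(10**43) bits of information.
print(round(math.log2(10**43)))  # 143
```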
I have to agree with you. I am a visual artist and also on the low rungs of the scale (while my spatial imagination is also way above average). Many people who claim a visual imagination are surprised to hear that I am aphantasic, but many artists actually are. In fact an artist, particularly a realist representational artist, is acutely aware of how our brains fool us into thinking we see a faithful representation of the world, when in fact all it does is check a very few points against a pre-rendered model for discrepancies. If there aren't any, we can go on our merry way walking down the street without seeing any of the details, while we actually think we saw every cranny in the pavement - this leaves processing power available for searching for actually important information such as a tiger suddenly leaping in front of you, or simply recognising an acquaintance. You cannot take it all in at once.
Once you actually try to draw what you see, you realise that you weren't seeing *at all*, and that truly seeing instead of just looking, letting all the input in, is overwhelming and exhausting, and takes a good deal of concentration (you'll surely miss the gorilla while you were looking at a ball) and training. You then have to constantly fight yourself to draw what you see instead of the poor, low resolution, idea and edge based model in your head. To actually see in proportion, perspective, value, hue, reflected light, etc, etc, etc. It is actually a humbling and mind expanding skill that I recommend to everyone.
I believe that people with vivid visual imagination do exist, but are vanishingly rare. Watch a video of the late Jung Gi drawing, and marvel. That's what actual visual imagination looks like, no guide lines, no reference, just start drawing on one end of a huge piece of paper and end up on the other. If it would be that common, artists like that would be a dime a dozen; it took Jung Gi decades of dedicated training and constant practice to be able to do that, but there are countless artists who have done and do just that, most of which will never be able to come anywhere close. Even if the hand eye coordination is not there yet, a visually imaginative individual should be able to put down all of the details, however clumsily.
Our brain fools us while we dream, making us believe that to some extent, the dream is a complete and detailed "movie" with all the information in it, but I think what is being played is again this low res, concept based model, more related to touch than to sight (look at the drawings of little children, they draw what they feel, the contours and edges of things, as they recently first explored the world with hands and mouth as babies; and the ideas of things that later come with language and stories), and as in the waking hours, the rest is assumed to be there and not challenged, because in dreams there's no reality to check back against. To which extent you can fool yourself, both in dreams and in waking, would perhaps be indicative of where you'd land on this scale.
Here's maybe something concrete. I can take some funny-looking 3D object and "visualize" it: imagine seeing it in front of me from different angles. I can also imagine touching it and running my fingers over the sides to see what shape it is. These are obviously both metaphorical to some extent, because there's no real object there for me to look at or touch. But they are different processes in my head, so I'm forced to say that "I visualize this object" and "I imagine feeling this object" are both more than just "I can hold the geometric properties of this object in my head".
That being said, I, too, need to calculate coordinates if I'm playing chess blindfolded and want to check where a long bishop move ends up. But chess seems like an unusual test, because when I remember a position, I primarily remember the relationships between the pieces and not their separate coordinates. I feel like I would do better if I were better at chess; the places where I have trouble are places where one "chunk" of my model of the position needs to interact with another "chunk" that I was keeping track of separately.
Geometric properties can be discerned through multiple senses. But if I assign the scene I'm considering some property that can only be discerned visually, such as color, this isn't fundamentally different to me than assigning objects a particular texture or a particular odor. Yet it seems that other people go on about their "mind's eye" but never about their "mind's hand" or "mind's nose".
I totally agree that skill and familiarity with chess allow for "chunking" as you describe. I basically have a big dictionary of positional motifs in my head, and starting from a familiar motif and then filling in details allows me to compress the position's representation. It's easier for me to keep track of my opponent's position when I'm playing against someone at or above my level than someone far below it, because their moves make more sense and fit in with motifs that I already have in my dictionary.
That is interesting, I would definitely say I can imagine images and sounds far easier than smells and textures, which is why I'd talk about a "mind's eye" or "inner voice" but not like, a "mental hand" or "mind's nose".
I'm curious whether you'd say you can "hear" a song in your head, in a way that isn't just remembering the lyrics? If I'm familiar with the song I can easily recall the beat, tune and intonation. I'm genuinely unsure if your reaction is going to be "of course I can imagine music, that's different" or "what are you talking about, you're now saying you can hallucinate sound as well as images?"
I can recall sounds more vividly than images mostly because I can mimic them quietly to myself (as in physically, by tapping/muttering/humming/whistling) and compare those noises against my recollection. If I force myself to remain still and silent then auditory recollection no longer seems particularly different from recollection of other senses.
Well, the mind's nose is in any case a much less useful tool for imagining things :)
I was thinking less about color or texture than about shape. Take a piece from a Soma cube puzzle. I can visualize what it looks like, and I can imagine what it would feel like if I were holding it in my hand. Both of these carry the same information about the shape of the piece, but I feel like - because they're different internal experiences - they are not *just* metaphors for manipulating that information.
There's an entire memory technique based around imagining a familiar location and inserting whatever information you want to remember into it.
https://artofmemory.com/blog/how-to-build-a-memory-palace/
I've tried using this with relatively good results. Imagine a chunk of wood with a hardhat on, and lo and behold, you remember to "inform your supervisor of the changes or note them in the log."
I have limited success at remembering things accurately; I'll get most of it right, but things like colors or height will shift around. A silver flashlight with a blue bulb gets remembered as a blue flashlight, but is otherwise correct in shape and size. And there was an event way back in the day where I was shocked to find a picture in my friend's house because I'd been seeing the exact face in my dreams multiple times. (Presumably I'd seen the picture before and forgotten.)
I can't play blindfold chess for beans.
This claim is equally weird to me, because when I remember visual images, I can't see how I could possibly describe it as anything other than visual. It's not like seeing it in front of me, it's pretty low resolution (except for the small region in focus), but I can imagine how all the colors and shapes fit together, and if I had the talent I could use the image in my head to create an artistic depiction. My dreams are pretty visually vivid, although again they're low resolution, only "rendering" the field of focus. I'm leaning towards this being an actual difference, although possibly we're just describing the same experience in different ways. I certainly feel that it's not metaphorical but actually the best description of the experience. If you tell me to imagine a specific object, I will picture it in my head in a very specific way, and if you later showed me images of that object I could say which ones looked more or less like the image in my head.
I would struggle with blindfold chess, not because I can't imagine a chessboard with pieces on it, but because my imagining is not a photorealistic 8x8 grid. I can imagine say, white knight at E4, but I can't do that for all 32 pieces simultaneously, and I'm impressed that people are able to hold all that information in their head (assuming some level of abstraction, but chess really is about the little details and I'd definitely get those wrong).
I'm not sure chess is a good test here. When imagining a scene, you choose where each element is, so if you don't remember each piece's position your imaginary chessboard isn't useful; you say you can imagine spatial relationships, and that's the most important element, so *would* we see a difference here?
A better test might involve something like "from this description, imagine which building would appear more fantastical" or something.
The bandwidth of my spatial imagination is nothing close to what I can get from vision. I can remember what piece is where, but if I want to verify the legality of moving a bishop from g4 to d7, I need to double-check the coordinate math to confirm that those squares are actually on the same diagonal, and then think to make sure that f5 and e6 are unoccupied. Having a board in front of me lets me do this at a glance, and even being able to look at an *empty* board speeds things up a lot.
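The coordinate math the commenter describes can be sketched in a few lines (a hypothetical illustration, not anything from a chess engine): two squares share a diagonal exactly when their file and rank differences have equal absolute value, and the bishop move is legal only if every intermediate square is empty.

```python
def same_diagonal(a: str, b: str) -> bool:
    """True if squares written like 'g4' and 'd7' lie on one diagonal."""
    df = ord(a[0]) - ord(b[0])   # file difference (letters)
    dr = int(a[1]) - int(b[1])   # rank difference (numbers)
    return df != 0 and abs(df) == abs(dr)

def bishop_path_clear(a: str, b: str, occupied: set) -> bool:
    """Check that every square strictly between a and b is empty."""
    if not same_diagonal(a, b):
        return False
    step_f = 1 if ord(b[0]) > ord(a[0]) else -1
    step_r = 1 if int(b[1]) > int(a[1]) else -1
    f, r = ord(a[0]) + step_f, int(a[1]) + step_r
    while (chr(f), r) != (b[0], int(b[1])):
        if f"{chr(f)}{r}" in occupied:
            return False
        f, r = f + step_f, r + step_r
    return True

# g4 -> d7 passes through f5 and e6
print(bishop_path_clear("g4", "d7", occupied=set()))    # True
print(bishop_path_clear("g4", "d7", occupied={"e6"}))   # False
```

This is exactly the check that a glance at a physical board does for free, which is the commenter's point.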
If I take the people in the top quartile of Galton's survey at their word, sufficient information should be available in their mind's eye to play blindfold chess just as easily as I can play ordinary chess. Yet the proportion of chess players able to play blindfolded at all is far, far lower than a quarter.
Memory and prediction are not the same skill. Once you start moving pieces in a blindfold game, you're not recalling an image anymore, you're creating one. If you ask me to remember the ruler on my desk, I'll remember it pretty accurately. If you ask me to imagine moving the remembered ruler from point A to point B and measuring something, both the ruler and the room are going to start shifting and warping, unless I have a memory of doing this exact task before.
I concur. I rank very highly in terms of visual imagination, as far as I can figure it, and the difficult part of blindfolded chess for me is not knowing where pieces are in relation to other pieces (as in, is this bishop on a diagonal to that rook), but in remembering what pieces are where. I can equally easily imagine chess pieces in any number of configurations, which makes remembering where things are on the actual chessboard much harder.
Not that I've ever tried to play chess blindfolded, mind. It's just that remembering where the pieces are over many moves sounds to me like the hard part; recognizing the spatial relationship of pieces I am visualizing is simple. Translating that visualization to grid coordinates also sounds difficult: it would be a bit of a pain to count the rows and columns every time, but presumably with practice it would become easier.
I'm also aphantasic. I'm told that the pupils of phantasic people constrict when they imagine seeing a bright light.
Also, I've experienced waking visual imagination after a surgery (presumably, under the influence of some anaesthetic) and it was very different from the way I normally do spatial tasks. I remember hoping that the capability would be permanently unlocked, but nope.
Now that we are permitted to post frivolous ideas again, it occurred to me that Scott - instead of simply declining a Conversation with Tyler [Cowen] - might consider writing a satire as if one had happened. For example:
T: It's time for overrated or underrated.
S: If we must.
T: The mental health benefits of Ixtlilton. Overrated or underrated?
S: Ix...? No, wait. You can't fool me into thinking an Aztec god is a medication!
T: What is the Scott Alexander production function?
S: Oh, same as everyone else. *Discreetly swallows Ivermectin tablet*
"I didn’t actually tell other people they should trust FTX, but I would have if those other people had asked."
Why? I hope this doesn't come across as an aggressive question, but I'm curious to understand what it was about the situation which would have led you to that conclusion. Was it based on an assessment of the people involved or of the exchange structure?
"I just subscribed to Astral Codex Ten" got SBF only 22 likes (23 is me, right now). Ok, Jesus had just 12. - Nice post. Makes me think of: All the smart and nice people who work(ed) for gov.-agencies/NGO supposed to do good, but really are often/mostly not - or mostly embezzlement of tax-payers taxes. I am still sorry I complained about the silly stuff I was supposed to do for the Goethe-Institut - which got me fired. I might have been able to spend some funds in a slightly less silly way. And I would have gotten me 200k in net extra-life-earnings. - At least I never taught at schools. Well, except, when I did. Can a good person work in a wrong-doing org/company? Ofc not, except when they do. Which is: all the time. “It didn't pay to trust another human being. Humans didn't have it, whatever it took.” "If I bet on humanity, I'd never cash a ticket." Bukowski, obviously. RIP
Why highlight the 40 million going to Democrats only? A stronger statement, and a better one for supporting the thesis of the point, i.e. "Most people everywhere were hoodwinked, including in the political system," is in the article.
"Of that total, 92% has gone to the Democrats, with the remainder going to Republican candidates and campaigns. FTX co-CEO Salame favors the red side of the political divide, donating $23.6 million to Republican campaigns for the current cycle.
The top political contributor was billionaire investor George Soros, who has pledged $128.5 million to the Democrats. Billionaire venture capitalist Peter Thiel, who has backed several crypto startups, was ninth on the list with $32.6 million for the Republicans."
Saying the 40 million went to Democrats makes it seem like they were uniquely vulnerable/compromised by ponzi crypto money when that's not the case. Especially now, when the Democrats are about to lose the House.
"Lower your opinion of me accordingly."
Done.
Your trust in effective altruism and those who believe in its validity is an epistemological red flag.
I am not an effective altruist and find it quite fraught with issues, however I agree with Tyler Cowen's assessment:
"I would say if the FTX debacle first leads you to increase your condemnation of EA, utilitarianism, philosophy, crypto, and so on that is a kind of red flag for your thought processes. They probably could stand some improvement, even if your particular conclusion might be correct. As I’ve argued lately, it is easier to carry and amplify damning sympathies when you can channel your negative emotions through the symbolism of a particular individual. "
https://marginalrevolution.com/marginalrevolution/2022/11/how-defective-a-thinker-are-you.html
The FTX debacle has had an infinitesimal impact on my assessment of crypto, human nefariousness, or EA adherents in general, given that I already believed it was bunkum. The excuses peddled, words written, and reasoning used to buttress EA post-FTX have lowered my opinion and assessment of EA. All they have done is further reinforce my opinion that, to put it glibly, EA adherents and utilitarians in general suffer from the same cognitive defect that a paperclip-maximising AI would have if it turned all humans into paperclips to make more paperclips for humans to use. As they say, it's the same energy, the two pictures are the same, etc.
Well denial ain't just a river in Egypt.
If you don't see FTX as another decent-sized condemnation of crypto, you're a complete fucking moron.
Literally one of the main criticisms of the space, since ~2012-2013, is something like the following:
Crypto is effectively an expensive, wasteful, poorly functioning database, except with no admin; it is controlled by "votes" among large players. No recourse to fix anything or track anything. The big virtue of crippling yourself in this way by using it is that it is "trustless".
Except... using it is overly technical for the vast majority of the population, who will be left to place their trust in entities which are actually *less* trustworthy than the banks and states they are claiming to want to flee due to lack of trust.
And this is yet another datapoint on that this is the exact dynamic. People worried about the hounds running straight into the arms of the hunters, and getting shot.
I am not a crypto enthusiasts and don't own any. Nothing about the FTX fraud requires it to be done with crypto.
Anyway the quote from Cowen isn't about whether crypto or ea or what have you is good or bad just that this single data point isn't useful for forming an opinion.
>Nothing about the FTX fraud requires it to be done with crypto.
Sorta disagree, the hype around the crypto-sphere is what enabled the FTX fraud. Yes it's technically possible to pull off the same kind of fraud with beanie babies, but it's not the 90s anymore. And the EA/rat community has perhaps played a significant role in lending legitimacy to that hype.
That said, I sort of understand where the Cowen quote is coming from. We're burning a scapegoat here and possibly updating too harshly in some ways. But I've long been a crypto-skeptic and I think the community needs some harsh updating in that general direction, even if it comes on the back of a single point of data instead of more holistically.
See that is just rank idiocy. It is definitely useful for forming an opinion. It shouldn't be the only thing you consider, but surely the fact that it is a case where one of the main criticisms of crypto came true seems like it should give you pause. It is *some evidence*.
You and he sound like the goddamn Bolsheviks: "sure, everyone warned that if we dispossessed the kulaks there would be a famine, and now there is a big famine, but that is just one data point! It doesn't mean ANYTHING!"
The idea that it isn't "useful for forming an opinion" is just sticking your head in the sand.
Not to mention which this is not data point 1 on this in the crypto space, but instead like datapoint 12.
I'm not convinced that the FTX situation is actually an example of the main criticisms of crypto. When people say "crypto is a ponzi scheme", they aren't saying that individual companies have a high likelihood of doing fraud. They're taking aim at the idea that the currency itself has any value. "The companies are actively lying to you about how much they hold in assets" is a critique I see coming from within the crypto community far more often than from outsiders. Stablecoin managers are constantly getting harangued to release public audits of their books, and clearly for good reason.
I think the ponzi nature of crypto was a significant cause of the disaster. The collapse of Terra-Luna seems to have been a major factor in Alameda requiring a bailout from FTX (as argued by https://milkyeggs.com/?p=175), and that collapse happened because Terra-Luna was a ponzi scheme.
Yes, there's a valuable distinction between "this token has no value except your ability to persuade other people to buy it, and eventually no one's gonna want to buy it" (Ponzi), "this token has few protections against theft or loss, far outweighing its advantages" (the patio11 critique), and "the guy holding the tokens or maybe dollars for you is _in the current process of stealing them_" (FTX). FTX's specific behavior would have been a problem even had they been 'merely' a traditional non-investment bank!
(Possibly caught earlier, but then again, see Wells-"you wanted an extra account, right"-Fargo)
That said, there is a more general problem that crypto exchanges go bust for perfectly legal reasons on a pretty regular basis. The fraud here is getting additional publicity and scandal, but Effective Altruism forum posts estimate that a little over 1/3 of 'committed' 2022 funding came through FTX or FTX Future Fund, and are talking about making sure people can pay rent, now.
Its... 'validity'?
What exactly does it mean for effective altruism to be invalid?
Utilitarian offshoots obsessed with using resources to improve the lot of those decoupled from the generation of those resources are just a manifestation of entropy. It is fundamentally against the processes that lead to good human existence, and uses self-wanking utilitarian ideology to justify this breaking of the selection mechanisms for biological, cultural, etc. order that lead to human survival, prosperity, and reproduction. It does so by dressing it up as charity, decoupled from why charity might be adaptive, and is emblematic of the global end-of-history, world-as-a-single-utopian-village cognitive error.
Tldr; EA (and utilitarianism in general) is analogous to paperclip maximising AI risk but applied to human wetware.
That's before even getting into whether EA is sincere at all, or whether its adherents are liars trying to status-launder, engage in FTX-style shenanigans, etc.
There are lots of possible answers to this.
One thing I would say is that it's important to distinguish the ideas and ideology from the movement and people claiming them, and as someone who believes strongly in the former I am deeply sceptical about large parts of the latter - I think that a lot of the "long-termist" things (especially AI risk) that people associated with Effective Altruism push are not actually effective altruism, and detract focus from short-termist projects like malaria and international development where an extra pound of spending probably does much more good.
I think that the large-EA Effective Altruist movement could fairly be called "invalid" (although that's not a word I'd choose myself) if it's not doing small-e effective altruism, and although some of it definitely is, I suspect quite a lot of it isn't.
I also think that there's at least a plausible line one can draw between the kind of high-self-confidence galaxy-brain thinking that leads to favouring AI over mosquito nets and the fall of SBF.
I'd say EA would be invalid as a movement if it turned out that the majority of EA adherents don't actually want to act in an altruistic way, or find out the most effective ways to do so. Instead they could have other goals, like learning to sound smart on the internet (mea culpa), or frauding intelligent, well-to-do people who lack defenses against social attacks from their own (perceived) ingroup, or quickly climbing the social ladder, or stinging other people's brain stems and laying eggs in their bellies.
So if thousands of people suddenly found themselves bursting into wasps, and a major EA proponent was found to be responsible, you'd update away from "EA is a good strategy that many people genuinely want to make happen" and towards "people who fund antihelmintics research and obsessively buy mosquito nets don't actually care about saving lives, they're giant wasps trying to reduce competition from the worm and mosquito people".
Is this the case? Personally, I don't think so. I believe the average EA proponent genuinely cares about other people, would sacrifice some personal comfort to help them, and is interested in knowing how best to do that. I also believe that people who deviate from the as-stated EA norms are a minority, and that most deviations result from morality creep instead of blatant bad-faith actors. I'll continue trusting them.
EA adherents turning people into wasps, if it happened to set the value in a utility metric higher, is the kind of thing I would expect them to make excuses for... "but but but but they are *happier* as wasps!" or something like that.
One would guess a disbelief in the accuracy of the adjective.
Yes, I don't think it can ever be effective. I doubt it is altruism.
I doubt both whether altruism is effective as the common understanding of the word might imply, and whether the reasoning and underlying implied worldview of effective altruists can lead to taking effective altruistic actions.
On point 7 - I think the main anti-crypto claims made by hostile media outlets were that crypto is full of grifters, that lack of regulation makes crypto a dangerous industry, and that crypto is a series of ponzi schemes. And I think the first two of those claims do an accurate if not precise job of explaining (predicting?) what went wrong at FTX, and you could make a case that the third one does too.
A question for guys who make "crypto is sound money/could be the next gold standard/would stop fractional reserve banking/rein in the central banks/more stable than fiat currency" type arguments. Has FTX lowered your confidence?
If SBF was investing his customer's assets and covering the difference in account balances by minting his own token, isn't that basically fractional reserve banking? The whole thing looks a lot like an ordinary bank run to me.
I work in the field. FTX has not affected my confidence in the soundness of the _technology_ I personally work with/on, which is always the part that has been of greatest interest to me. But it has lowered my confidence that it is possible for the _industry_ to function in a way that is good for the world, yes.
(EDIT: I guess I'm not really responsive to your question, since I'm not a crypto-goldbug or whatever.)
Everyone in crypto knows the entire "industry" is scams stacked on top of each other.
I'm quite impressed at the illusion of a legitimate financial institution FTX managed to create, despite being a bunch of autists abusing stimulants in the Bahamas.
It appears that the proliferation of crypto is based directly on the recent low interest rates and free money floating around. There's been too much money in the system for a number of years, and lots of people chasing investments when returns are dropping. It seems quite likely that all of these digital currencies are in danger of sudden collapse if the monetary situation changes. This seems likely in the next year or two.
The problem with treating crypto like gold or some other type of more normal investment is that there's no actual good that can be held. Sure, gold can lose a ton of value as well, but you at least still have a metal with some inherently useful properties. With crypto you have literally nothing if the value bottoms out.
I'm agnostic on the main question, but no; the FTX thing has zero influence on my view. It was a centralized business; none of its faults are due to the tech of ETH.
This. If someone invents an (almost) unbreakable material, and someone else starts a company where they supply locks made from it and hang on to the keys for you, and it turns out they were selling the keys to burglars, that doesn't affect my confidence in the structural properties of the material.
But you won't think the material is going to create a new age without burglary.
But the point is that it's still possible for crypto based institutions to behave the same way dollar/gold/fiat based institutions do.
I mean, you don't have to go through a centralized exchange to engage with crypto. People do it because it's more convenient, but there's nothing stopping you from using DeFi, or practicing self-custody.
Yeah -- and I don't see a counterfactual world in which that statement is not true.
The core idea of crypto (or at least one of the core ideas) is to have tech that does away with the need for trust. This applies to various things; if I hold ETH in a personal wallet, I'm not relying on any other people. But it doesn't apply to centralized exchanges.
One should also point out that as far as I know, there's really no reason to ever leave funds lying around on a centralized exchange. You could and should use them only briefly and then transfer your funds to a personal wallet. (And also worth noting that it's possible to avoid them altogether.)
Don't you need high-speed low-cost low-latency Internet for crypto to have any value? Does that kind of Internet run on trust at all, or require government, or could we imagine it arising spontaneously in a libertarian paradise or anarchy?
You could say this exact same thing about money/fiat.
No need for trust: just make sure each day you cash out into hard goods! People don't do that because it is super inconvenient. Ditto crypto. The "trustless" "feature" is not a feature 90%+ of the users are going to be able to take any advantage of, and they are just going to end up trusting even more shaky institutions.
How is "selective serotonin reuptake inhibitor" parsed?
Is it "( selective ( serotonin reuptake ) ) inhibitor" or "selective ( ( serotonin reuptake ) inhibitor )"?
It's a monoamine neurotransmitter reuptake inhibitor which is selective for serotonin (as opposed to dopamine and norepinephrine). Serotonin / dopamine / norepinephrine are all structurally similar, and drugs tend to have activity on all three systems.
(So of your options, the second one.)
Generally, my surprise at "another crypto outfit goes down in flames among accusations of fraud and deceit" is on the "time to get more popcorn" level. Apparently, the demise of this particular outfit hurts genuinely well-meaning people, not just the usual fools who have not bothered to watch "Line goes up" yet, which is unfortunate. But to me, the big question is, who's next? If, as Scott describes, one company managed to hide behind an altruistic window-dressing and almost achieve regulatory capture - what are the other timebombs that are ticking in the US and European economies?
I wouldn't be too surprised if Elon Musk's empire collapsed next (I feel there has been a shift in public perception from "tech wizard/ genius entrepreneur" to "Bond villain/ bumbling fool", which may make it harder to pull off more stunts). Who/ what else?
If a 'line goes up' company collapses it might leave nothing but a crater behind. Maybe some software and servers to run an exchange. If Tesla collapses, isn't the floor way higher than that? Another company will surely take over or repurpose all those factories and warehouses and the tech running them. I'm a giant Musk critic, but at least most of his companies are making real stuff.
I wonder how much Tesla's customer driving data is worth by itself? Any company that wants to compete in self-driving would need to capture a similar dataset.
SpaceX certainly depends on government contracts. If there were a timebomb it would probably be that one. I think the government would happily bail them out though. Seems like a no brainer in national security terms alone, no matter how unprofitable, we need to have some launch capability in the US as insurance at the very least.
SpaceX should be fine. NASA has a clear interest in keeping them running and they are probably reasonably (but not very) profitable. The real question is around Starlink, since Musk has said it isn't profitable and he has sunk a lot of money into building it. It's not really clear to me that it will ever be profitable, and that's something that will create turmoil over the next few years for Musk.
SpaceX could probably spin off Starlink if it came to the worst, and then continue running on NASA contracts. I'm much more skeptical about things like Starship, however. I think the demand just isn't there, and in any case the rocket doesn't make a lot of sense for orbital applications. If it ever works I suspect it will only fly a handful of times per decade.
I thought the claim was that Starlink will be profitable if Starship flies, since it reduces the cost per satellite a lot.
It would reduce the cost of launching the satellites. The satellites themselves would obviously still be kinda expensive, Starlink would still have to launch a lot of them just to replace the losses (15-20% per year), and they'd still have to get a lot of customers to dish out $100 a month... and I suspect that the regions with a sufficient density of potential customers, but no fiber-optic connections available, will keep shrinking in the coming years.
I think Starlink would be bailed by the US government because of the significant military benefit (as seen in Ukraine).
That's what happened with Iridium, so yeah. But it didn't happen until Iridium had declared bankruptcy, so the original owners got nothing.
I think musk's companies (at least Tesla and SpaceX) are net profitable now? I wouldn't be shocked if their stocks crash and Musk personally declares bankruptcy, but I'd expect Tesla to keep existing and producing cars under new management.
From a cursory web search: SpaceX is a private company, so they don't have to publish info on their financial status. They SAY they are profitable, which means exactly nothing. They apparently make profits on commercial Falcon 9 launches, but whether that outweighs the money they invest into Starlink and the development of Starship is doubtful.
W.r.t. Tesla, you're probably correct.
More on the technical side, Frances Coppola explains a bit of what was going on at FTX-Alameda:
https://www.coppolacomment.com/2022/11/the-ftx-alameda-nexus.html?m=1
"his hedge fund can make money by taking risky leveraged positions, but it has to raise funds, and that's not cheap. And his exchange can make money by charging fees on transactions, but although that can be a nice slow steady income, it's not going to make him the trillions of dollars he wants.
But Joe's spotted an opportunity. The exchange has lots of customer assets that aren't earning anything. If he puts those customer assets to work, he can earn far more from his exchange customers. And he's got an obvious vehicle through which to put them to work. The hedge fund. If he transfers customer assets on the exchange to the hedge fund, it can lend or pledge them at risk to earn megabucks.
Of course, there's a risk that the hedge fund could lose some or all of the customers' funds. And the exchange promises that customers can have their assets back on demand, which could be a trifle problematic if they are locked up in leveraged positions held by the hedge fund. But this is crypto. There's an easy solution. The exchange can issue its own token to replace the customer assets transferred to the hedge fund. The exchange will report customer balances in terms of the assets they have deposited, but what it will actually hold will be its own token. If customers request to withdraw their balances, the exchange will sell its own tokens to obtain the necessary assets - after all, crypto assets, like dollars, are fungible. "
What was the legal status of the assets held by the exchange? Were they considered bailments or deposits? If they were considered deposits, would that not turn the exchange into a bank? Was it regulated as a bank and did it have a reserve requirement?
Thanks for this. I feel like I’m much closer to understanding this implosion than I was before!
What I still don’t understand: This seems like it would only work if they were conjuring tokens out of nothing and declaring them to be fixed to a dollar value (otherwise their assets would fluctuate with the value of the token, potentially below the 1:1 replacement value that allows for fungibility.)
Was there a market for FTT or were they just declaring what it would be worth?
They were basically conjuring FTT out of thin air, but FTT were thinly traded enough that they could control the price and peg it to what they wanted without much expenditure of resources. Except when suddenly everyone wants to trade their FTT in for something with real value, the problem becomes that you don't have the resources to back up the FTT.
Just typical ponzi scheme stuff. I can give you an "IOU" for $4,000, and just pay off the people who stumble in asking for their $4,000 one at a time, even with just a small cashflow of a few tens of thousands of dollars a day.
But if everyone starts coming and suddenly I need $400,000 today (or $4,000,000,000) instead of $16,000...I am screwed.
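The commenter's arithmetic can be made concrete with a toy sketch (all numbers are the commenter's illustrative figures, not FTX's actual books):

```python
# Toy illustration of a redemption run: an issuer with a modest daily
# cash inflow can honor a trickle of $4,000 IOU redemptions, but not
# everyone showing up at once.
daily_cash_inflow = 20_000   # "a few tens of thousands of dollars a day"
iou_size = 4_000             # face value of each IOU

# A trickle of redemptions is covered by daily cashflow:
trickle_demand = 4 * iou_size            # $16,000 on an ordinary day
print(trickle_demand <= daily_cash_inflow)

# A run is not: 100 holders demand redemption the same day.
run_demand = 100 * iou_size              # $400,000 demanded today
print(run_demand <= daily_cash_inflow)
```

The solvency question is simply whether demanded redemptions exceed available liquid assets on any given day, which is why the scheme works right up until it suddenly doesn't.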
Got it, thank you!
(And wow, seems like they should’ve foreseen this blowing up.)
This is good, but this assumes that this was the plan all along and the entire organization was in on the fraud.
The theory most people in the space somewhat believe is the following:
1. Alameda takes huge losses during the LUNA saga.
2. For reasons, SBF and the core exec group of FTX decide to bail them out, using a back door to lend them billions in FTT tokens.
3. Alameda then deposits those FTT tokens and borrows against them in the tune of billions, to get themselves out of the hole.
4. It all comes crashing down when the balance sheet leaks, CZ tweets that Alameda holds a lot of FTT and that he plans to sell his own, and rumors spread quickly that CZ's fire sale of FTT tokens might affect FTX solvency, creating a run on the brokerage.
"This is good, but this assumes that this was the plan all along and the entire organization was in on the fraud."
I don't know if this was the plan all along, but if it were, I don't think they regarded it as fraud. First, from what I'm reading, only a handful of people really knew what was going on with all the juggling around of funds etc., everyone else just did their jobs and did what they were told.
Second, I don't think Bankman-Fried and his trusted handful thought of things like this as fraud, they thought of it as "high-risk, go for the 51% chance as the game theory maximisation of utility one-boxing strategy teaches us" - they relied on being Very Smart and Rational(ist) and that this was a bunch of funding just sitting there doing nothing when it could be put to work and earn huge returns and Do Good. It would never go wrong, because they were Too Smart for things to go wrong. Maybe the dull boring red-tape bureaucracy said this was a big no-no, but hey: Silicon Valley, move fast and break things, this is the new generation so get out of our way grandpa, the future belongs to us and anyway you can't even understand how crypto works so how are you gonna tell us what to do?
It's Kipling and the gods of the copybook headings all over again.
Regarding the gods of the copybook headings, doesn't Kipling explicitly contrast them against the gods of the marketplace?
In which case it seems likely to me that the gods of the marketplace were the authors of both the rise and fall of FTX and that the religious platitudes and folk wisdom of the gods of the copybook headings had very little to do with it.
Of course I never did properly understand what Kipling was getting at with that poem. Perhaps I should read the Wikipedia article on it.
Well maybe it wasn't fraud in their heads, but a brokerage lending client funds to another client is fraud in most jurisdictions.
Lending assets through a back door so that internal compliance and audits cannot flag them seems like fraud to me.
Finally, the risk management process of providing lending value to your own token is very irrational. Maybe in their heads the end justified the means but boy did they do some pretty poor risk calculations.
Thanks for this, I hadn't seen anything on "what did FTX do wrong" until now.
The lack of attention paid to global traffic deaths is genuinely insane. In the US alone, there are 30K annual deaths and countless serious injuries.
As others have amply noted, lots of people pay lots of attention to traffic deaths, and have done great things in making driving safer.
We have *also* paid lots of attention to the *benefits* of the unparalleled personal mobility provided by widespread use of automobiles. Aside from a relative handful of extreme WEIRDos in places like San Francisco and NYC, we've all decided that these benefits are worth the 0.01% chance of being killed in a car accident next year. If we can reduce that to 0.005% or 0.001%, great, but we're not giving up any of the utility to get that.
If you disagree, fine. We're not going to rearrange the world to your tastes, but we're also not going to force you to get in a car or live in one of the places where driving is a necessity.
So I take it you're a big supporter of repealing all parking mandates and raising the gas tax to $4 or so to account for the full externalities of driving, so that the rest of us aren't subsidizing you anymore?
Hmm, what about the subsidy you get from those who drive? Who delivers the groceries to the grocery store in a big truck so you can cycle down there and pick up all you need in your backpack? If the electricity goes out on your street, are the workmen going to take the bus to get there? How will they bring their tools? Are you willing to fork out much higher prices at the fast-food joint so that the line workers can buy houses in your neighborhood instead of commuting (by car) from where it's cheaper?
Driving isn't ubiquitous because everyone loves cars, or from perversity, but because it provides enormous economic advantages, both personal and collective.
Nope, it's the result of trillions in government subsidies and laws requiring car-oriented development going back to the Eisenhower administration. Of the things you listed, some aren't required, and those that are could simply pass prices on to the consumers - zero reason to do it through inefficient government mandates and subsidies.
Huh. Well, OK, if your belief is that vast conspiracies account better for the current state of affairs than a few billion people[1] making the decisions that are economically and personally advantageous for themselves...hmm, not sure what to say. You're in good company, of course. Many people think any number of aspects of the current world are the result of fiendish and shockingly broad and long-lived conspiracies. It's a point of view.
---------------------
[1] I assume you can rationalize the fact that your conspiracy took root all over the world, i.e. was not restricted to the Eisenhower Administration in the United States, and was effective everywhere from Jakarta to Timbuktu.
That's... Not remotely true? The US is a massive outlier on car centricity. And it doesn't require a conspiracy, city planning paradigms vary by era and the late 20th century top-down planning paradigm was pretty car-focused before most places realized the downsides and started walking away from it.
I actually don't drive much, primarily for this reason. I've still got a learner's permit when I could have gotten a license a decade ago, because any time I'm on the road, I'm wondering whether I'll contribute to the enormous number of deaths.
Obviously just not driving isn't a scalable solution, at least until self driving cars take over or public transit vastly improves. In the short term, I'm not sure what we can do.
Umm, it is something car companies, engineers, and the regulatory environment spend literally trillions on. I don't think there is a lack of attention.
It is just a hard problem, and also, you know, everyone dies. Constructing a society where no one dies except by old age is not desirable, and would in fact be miserable.
The trillions are spent on improving capacity, not saving lives. If engineering was placed on saving lives, you'd see bollards on busy street corners instead of breakaway street light posts.
You are just wrong. A significant portion of the design and build costs are explicitly and solely about safety.
I mean, this is the entire debate that Confessions of a Recovering Engineer by Chuck Marohn is about.
I know chuck, not very impressed.
How does this divide up in terms of deaths of car occupants, pedestrians, and cyclists?
Actually, quite a lot of attention is paid to traffic deaths. I'm in my early fifties, and since I was young things have changed significantly: we have graduated licensing of drivers now, cars are sturdier and have systems like air bags to protect passengers in the event of a crash, and we no longer wink at drinking and driving. And as a result it is safer to travel by car than it used to be a generation or two ago.
https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year
The US had 12.89 traffic fatalities per 100,000 people in 2021. The rate was 26.42 in the glorious year of 1969.
You're discussing reducing death *rates* by half while barely decreasing the actual number of deaths... this is wonderful mathematical thinking that doesn't acknowledge that *almost none of these deaths would happen with changes to our transportation system.*
Almost none?
I've been in a couple of car accidents (thankfully none that have killed anyone, but there have been some serious injuries), and I used to pick up dead bodies from car crashes as a job, so I get what you're saying. There absolutely is a huge human cost.
But I think you're failing to appreciate just how important cars are. You can increase the use of public transport on the margin, but going down to "almost none" is not realistic.
I live in hope someone actually manages to figure out safe self driving vehicles, but until then, buckle up and be careful.
Again, you can have cars and *extremely low* traffic death rates, just look at Sweden, Norway, or even Ireland. Hardly anyone would argue that cars aren't a major part of their society. They have different infrastructure, and driving is a privilege, not a requirement.
Sweden, Norway, and Ireland do not have "almost no" traffic deaths.
If you're arguing that reducing traffic deaths somewhat is doable and worth it, great, I agree. But "almost none" is not realistic.
If you have a proposal for how our transportation system should be changed, feel free to present it. I expect quite a few of us will be ready to critique it.
I think automobiles are an effective tool. But when people are using an automobile to go to the grocery store, we've gotten to the point that we are needlessly putting people in harm's way for no reason other than a slight convenience.
Groningen, Netherlands is a model city for transit reform. We needn't completely abandon the car to make it extremely safe (lord knows our highways are *extremely safe*); we have just broken our towns and cities and made them extremely deadly by prioritizing the automobile over safer modes.
When the number of people dying is both large and young, the number of life-hours lost significantly outweighs that of other high-mortality events.
Things have improved in some ways, but this is partly counteracted by cars getting bigger - this is slightly safer for the people inside the car but way more dangerous for people hit by it.
(And while the US has improved on this, the improvements stalled out 20-30 years ago, which is why it used to do better than Europe but now does much worse).
How much difference does it actually make whether a pedestrian is run over by a sedan or an SUV?
About twice as likely to kill a pedestrian, conditional on a crash happening. Crashes are also more likely to happen, since SUVs have worse visibility for pedestrians
(See e.g. https://medmal-law.com/are-suvs-more-deadly-to-pedestrians/ )
>>>partly counteracted by cars getting bigger -
Which is, itself, a side effect of the federal government requiring fuel efficiency improvements that were impossible to meet - at least at a price consumers could pay. Therefore manufacturers made the midsized cars larger which meant they had a more relaxed mpg standard. This was also driving part of the used car price hikes - midsized cars were much rarer.
The problem here isn't fuel efficiency improvements, it's that they made light trucks exempt from them. The tax and fuel efficiency code should definitely be revised to make sedans more economic than trucks or SUVs.
Please see above: the required improvement was physically impossible. And light trucks of certain wheelbases were included!
Yep, same age group and interest here ;) Still, Germany got down from 20k human roadkills a year in 1970 to barely 3k now. So, the US is kinda lagging behind (drivers too young?). - UK safest. After Sweden, but Sweden is too exotic, right? - Public attention in Germany was: One short text in the paper once a year in 1980. In 2022: "tragic" crashes nearly daily on TV (lumped with news about celebrities).
Probably the right metric here is traffic fatalities per passenger-mile, not per capita.
Every state is different, granted. But halving the amount of fatalities is still less good than halving them thrice over in the same period of time. We did not do really revolutionary steps in Germany to reduce traffic deaths - still no speed limit on many miles of our autobahnen ;)
Isn't that slightly incongruous with the energiewende? It seems strikingly odd that lots of places in the enormous and energy-profligate US have 55mph speed limits. But in environmentally conscious Germany you can take your 5 litre Merc and blast up and down the autobahn at 150mph all day long. Am I missing something?
150mph? Well, at 250km/h the Mercedes will usu. stop accelerating (in-built feature) - but one can go 260 miles(!)/ph, too : https://www.youtube.com/watch?v=7pg1hhW5qhM the police checked the video and agreed: legit*. - Freie Fahrt für freie Bürger! (free citizens need FREE-ways).
In real life, A) a speed limit at 130km/h would not change much, maybe 2 percent less petrol in all.
(speed limit at 55mph/100kmh? That proposal will kill your political career.) - But B) "stretching" the exit from nuclear power also just added a few percent power. - the GREENS want A but not B, the "FDP (free liberals)" want B but not A. The lobby claims: consumers worldwide buy BMW/Porsche/Audi/MB because of the dream of racing our German highways with them. What's your take on that??
* "For those that say it was irresponsible and dangerous:
4:50am on Sunday / 10 cars per 10km so 1 car per 1 km
Good visibility is about 3-4km straight ahead so there is enough time to react.
The Chiron can brake from 400 to 0 in 9 sec. within 490m. All cars are in the far right lane.
There was an earlier drive through the section to make sure there is nothing on the road.
There is a fence along the whole stretch of the highway, so no animals can interfere.
3 people were spotting on 3 bridges for maximum safety.
If you still think it’s irresponsible and dangerous, well, we respect your personal opinion."
There is a very strong lobby of those who love to be able to drive as fast as they want. In the current government, they are represented by the FDP - the liberal democrats. Actually, the proponents of having no speed limit mostly associate this in public discourse with 'freedom'. Interestingly, compared to the US for example, in Germany it's no problem for our 'freedom' to get fined for specific (really radical) things you say in public, or of course to *NOT* be allowed to carry a weapon ... but having a speed limit would *really, really* limit our freedom.
That's actually not a majority position any more, and there has been some public pressure to implement at least a temporary speed limit, while we have this energy crisis currently going on. But, as in first sentence.
Interestingly, many of our highways are so crowded, or so full of construction sites, that you actually can't drive that fast anyway. But having the theoretical option is all that counts!
PS: There is also the lobby of those who build very fast cars, of course.
Passenger-mile is problematic because a lot of the potential safety improvements are by better transit/urban planning that reduces total miles driven.
I think that's overwhelmed by the much stronger effects of, "hey, the US is big, you're going to need to cover more ground," and "the US is wealthy, more people are going to buy cars."
Also, the intervention of, "we should just completely rewrite all physical infrastructure so that American cities and towns are small and dense" is a fun counterfactual but is not really a possible intervention.
Fortunately there's a huge range of easy improvements (raised crosswalks, speed bumps, red lights cameras, protected bike lanes in the many cities that do have reasonable density, TOD and upzoning near transit, adopting international best practices for transit, taxing oversized vehicles, etc) that don't require anything extreme like that, so this is pretty much irrelevant until we do all of those.
Is there any low hanging fruit for minimizing this risk personally? Working from home seems like it would have the biggest impact (my commute is easily >90% of my total driving miles) - anything else come to mind?
Slow down at intersections. Even if you are a perfect driver (and who is?), someone else can just blow through a red light without realising. Give yourself a chance to react if that happens.
Vary your commute. Unless you are drinking or texting, your next most dangerous behavior is probably zoning out because you've done this trip umpty times before and are doing the same thing that has always worked. Only *this* time, for the first time, someone is actually pulling out of that alley overgrown with weeds from which you've never ever seen a car emerge, and which you have long since unconsciously classified as "not a road."
There's some very interesting work out of Europe that shows that *removing* road signs improves safety, which suggests that a dulled attention due to assuming you already have all the data is a significant factor in accidents.
Off the top of my head:
a. Don't drive while impaired by drink, drugs, or sleep deprivation.
b. Drive a reasonably modern car with modern safety features. Bigger is mostly better for your survival.
c. Don't screw around with your cellphone or other distracting things while you're driving.
d. Drive fewer miles--eliminating a long daily commute is a big win here.
e. Avoid driving in bad conditions (rainstorms, snow, heavy fast traffic, late at night when the bars close, etc.)
Working from home or having a lot of flexibility in when you work are big wins for (d) and (e).
Probably the lowest-hanging fruit is time-of-day you drive. https://injuryfacts.nsc.org/motor-vehicle/overview/crashes-by-time-of-day-and-day-of-week/
Slowing down. Even though you can't control the other car's speed, the total energy of the crash will be less.
Not drinking and driving. Being Female. Avoid rural or semi rural highways. Avoid driving during afternoon rush hour.
Your intuition is right that just driving less will have the biggest impact.
Here is a cool site from NHTSA with detailed stats on crashes: https://www-fars.nhtsa.dot.gov/Vehicles/VehiclesAllVehicles.aspx
Bear in mind that an increasing number of the deaths can be attributed to 'people not doing what they should' which has implications for the results of creating even more rules.
This is why road design (and potentially automated enforcement, and maybe in the future things like speed governors) are better than just making better rules.
Automated enforcement- probably going to slam right up against the race concerns in the US. Many competing interests.
I'm sure the "crime should be legal" style left (which is unfortunately large in the coastal cities where this is most needed) would object on these grounds, but automated enforcement is generally much less racially biased (it literally doesn't see race), so I think the main actual objection would be from drivers who like speeding.
You are assuming that that bias is in the cops' actions and not in the difference in driver behavior. Also related: if a car is driven by someone other than the owner, it is not right to ticket the owner, which is what happens in automated systems. Finally, none of this addresses the issue that a ticketed person must still pay the fine when it arrives in the mail.
A) I'm fine with people getting more fines if they really do commit more crimes
B) it's completely reasonable to just say the car's owner is still responsible. If you let your friend drive your car and he got a ticket, it's on you to make him pay you back.
C) no, this is one reason we still need cops as well as automated enforcement.
The fact that it is "less racist" isn't going to protect from heavy attack for racism you when the results turn up that speeding tickets are 50% black 50% white in a 15% black area (or whatever the numbers would be).
Also, I don't think it is clear it is less racist. Some of these automated systems have come under attack for racism specifically because some current enforcement methods are anti-racist through design or happenstance (less policing in black areas, or officers actively trying to not have all their tickets be to one race).
Yeah, the racial component brought me around on automated systems. Also, it's kind of a waste to have expensively trained police doing traffic enforcement when they could be doing more effective policing.
I think automated enforcement makes sense, but drivers tend to adapt to it, so you still need some cops doing traffic enforcement, at least for really bad behavior. But yeah, having 95% of the speeding, running a red light, passing a stopped school bus, etc., tickets issued by machine instead of by a cop seems valuable--let them worry about the other 5% where someone's driving drunk or drag racing or something.
Yep. I think it's getting a bit more attention the last few years with the new urbanist yimby movement (if still way less than it should), but that could also just be my bubble.
Heavens, what lack of attention? I can think of relatively few things that, for the carnage they cause, have received *more* attention in the last 50 years. We got seat belts, and then air bags, and crash testing and all kinds of improvements to vehicles -- collapsible steering columns, engines that "drop" on impact, roll cages -- we got mandatory seat belt laws, and much stiffer drunk driving laws, more rigorous requirements on teen licensing, and phone use while driving laws, and now insurance companies are even hawking apps that will monitor your driving and reduce your rates if you drive more safely.
Presumably all this attention has something to do with the fact that automobile fatalities per capita have dropped 50% in the last 50 years. We should be so lucky to have something like that happen to e.g. drug overdose deaths, which according to the CDC top 100,000 a year lately, 2-3x more than traffic deaths.
Cars have gotten more safe over the years, but I think illegal drugs have gotten less safe thanks to cheap fentanyl being available and hard to dose properly. (I mean, a pharmacist with a proper setup could dose it properly, but a semiliterate biker mixing up the drugs in a trailer probably can't.)
I mean, unless the buyer is asking for fentanyl, the proper dose is zero fentanyl, which is easy enough. The problem with the semiliterate biker is not his lack of pharmaceutical chops, it's his reluctance to give the customer what they paid for.
Agree that the lack of attention is worth noting.
I'd add that there's an awkward truth hiding here that is even more interesting:
If we really wanted to reduce total traffic deaths to zero, or at least get closer to that number, it would be relatively simple to pass a law making it illegal for car manufacturers to make cars that go faster than, say, 80 km/h. Exceptions could be made for ambulances and law enforcement, maybe even for trucks and other pieces in the logistics chain, in order to keep the economy running.
Just by crippling all modern personal vehicles we'd literally be saving millions of lives overnight.
The fascinating truth is that society seems to be willing to sacrifice a pretty large number of people for the sake of convenience. I'm not even sure it's the wrong choice. Getting places quickly by car is a huge part of modern living. But it's not a choice anyone seems conscious of making.
Have you ever read The Left Hand of Darkness? On the planet where the book takes place, cars go as I recall only about half as fast as on Earth, not for any particular physics or engineering reason, but just because the inhabitants, unlike us, simply assumed that safety was more important than convenience.
Interesting that someone was thinking the same way as you over fifty years ago. Perhaps since then we've all just gotten so used to fast cars that it's become harder for us to notice the cost.
Interesting! Admittedly I got this idea from my dad, I doubt he read TLHoD but I'll let him know.
>we'd literally be saving millions of lives overnight
Extending lives, zero of those people would live forever. Sounds dumb, but actually an important consideration in broad scale public policy discussions.
Lmao! Touche.
OK I'd love to understand the policy implications of considering it "saving" vs "extending".
One example I can think of is COVID since the vast majority of mortalities were very old people. So even if a lockdown saved X people, that's a lot less extra hours of live saved than stopping X fatal vehicle crashes, since to the best of my understanding many more traffic victims are very young.
We can talk about "saving lives" when we save enough "average life hours" to equal an entire life.
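The life-hours framing can be made concrete with a back-of-envelope sketch. All numbers below are purely hypothetical, chosen only to illustrate why two interventions with the same headline death count can differ hugely in life-years:

```python
# Hypothetical illustration of the "life-hours" vs "lives saved" framing.
AVG_LIFESPAN = 80  # assumed average lifespan, for illustration only

def life_years_saved(deaths_prevented, avg_age_at_death):
    # Each prevented death recovers the years that person would otherwise
    # have lost; sum over all prevented deaths.
    return deaths_prevented * (AVG_LIFESPAN - avg_age_at_death)

# Same headline count of 1000 deaths prevented, very different life-years
# (the average ages are made up):
elderly = life_years_saved(1000, avg_age_at_death=78)  # e.g. a heat wave
crashes = life_years_saved(1000, avg_age_at_death=35)  # e.g. traffic deaths

print(elderly)  # 2000 life-years
print(crashes)  # 45000 life-years
```

On this framing, preventing the same number of traffic deaths buys over twenty times the life-years, which is the commenter's point about young victims.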
I mean, sure, you can adopt a definition of "saving lives" that makes it so that no (human) life has been saved, ever, and that it's decently probable that none ever will, or you can actually use words in accordance with common usage.
Well, the difference between "saved lives" and the reality of what is happening is actually pretty important in public policy discussions, so it is worth being pedantic about it, and fuck "common usage".
Things like heat waves and hurricanes often might "kill" 60 people, but almost all of those people were on life support or had very serious medical issues and were going to die soon anyway.
Plus "saving lives" has an emotional appeal that sort of wrecks the reality of the actual calculus of the situation. Which is you are just pushing back the date of death. We are all on the same conveyor belt to nowhere.
In Sweden there is a "Vision Zero" (https://en.wikipedia.org/wiki/Vision_Zero) aiming for zero traffic deaths. Of course it is not working in a literal sense. But last year Sweden had 192 traffic deaths which, considering Sweden's population is 1/30th of the US's, would translate into 6k American traffic deaths. Point is, a lot can be done if there is will. And in some places there is slightly more will than in other places.
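The per-capita translation here is just a population scaling; a quick sketch with approximate population figures (my assumptions, not from the comment):

```python
# Scale Sweden's traffic deaths to the US population (figures approximate).
sweden_deaths = 192
sweden_pop = 10.4e6   # assumed ~10.4M
us_pop = 332e6        # assumed ~332M

us_equivalent = sweden_deaths * (us_pop / sweden_pop)
print(round(us_equivalent))  # about 6,100 -- the comment's ~6k figure
```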
Agree. About a million people a year killed on the roads.
For me there's a big contrast to those who want to doom-monger about climate disaster deaths, which run at around 20k per year (6k last year). Also, for as long as records have been kept, the death rate on the roads has been going up, and climate deaths have been going down! And yet the freaking out is invariably about the effects of floods and storms, not the 100 times worse killer - motor vehicles.
The *rate* has been going down for 50 years. Do you mean the absolute numbers? But even those have been declining in the United States for the past 25 years or so:
https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year
Huh, It's kinda interesting that total deaths have remained roughly constant.
Yes it is. One wonders if it's accident or there is some reason. Maybe people have a threshold "this is a problem" and as long as the deaths per capita are below that level not much happens. (The population has been growing only very slowly over that time.) I think something like that is true of influenza. It actually kills a fair number of people, but sort of in bursts, and people get worked up over it in a bad year when it snuffs 60,000 grannies, but then slowly forget about it over the next 5-6 years as it fades below the "threshold" -- or so I surmise.
Yeah something like that may be it. 5-10 years ago I was hearing a lot of lamenting of deaths (in young people) due to painkiller overdoses (oxycontin etal.). And we sorta dealt with that. There also must be a size effect. If ten or one hundred people die at once it's somehow a bigger deal than one person dying in one hundred different instances.
Where did you get your stat for increasing road death rates? In France at least they have been divided by 4 or 5 since the '70s...
Yes - as I live in France I should have known that! My error - I misremembered
If it's any consolation, sometimes you just get a combination of factors that make it impossible to avoid a disaster. Even good internal controls in a company can be subverted at the top, and there were too many factors converging in this case to create a situation where SBF had no real accountability or guard rails on misconduct except insolvency and reputation loss.
Investors in 2019 had massive FOMO about missing out on the Next Big Publicly Traded Tech Company, and there were plenty of dubious firms getting big money with no accountability because if you insisted on accountability . . . well, there was another investor willing to step up, and what if HE got your big Facebook 2.0 stock payday?
Best thing you can do is try and guess whether it was really one of those circumstances, and diversify your risks and hopes.
I am terribly sorry Scott. But, I have stayed as far away from crypto as I possibly could and have told everyone I know about the problems I see.
I spent a lot of time working in the financial business and have been deeply involved in regulatory and other back-room and plumbing issues. I knew, and tried to warn people, that crypto does not have the systems, or the people to run them, that conventional financial institutions do. And those institutions are not spotless. But there are so many overlapping regulatory authorities (FRB, FDIC, FINRA, PCAOB, NYSE, FASB, ...) and pots of money that most people can spend their lives being FDH (fat, dumb, & happy) about their banks and brokerages.
This is not true with crypto. In 2008 several of the USA's largest financial institutions failed and a bunch of others were very close to going under. But, very few people actually lost money.
Crypto has none of those systems or back-ups.
You are not smarter about money than Warren Buffett, Charlie Munger, or Jamie Dimon. I know I am not. They all said crypto is not good. Believe them.
I think the people who are insisting that utilitarianism/consequentialism doesn't really tell you to violate commonsense moral constraints against lying/cheating/stealing when the upside is high, and that someone who engages in fraud to get billions of dollars to spend on effective charities is misapplying the view even if they were weighing the risks as well as they could, are not really owning the implications of their theory.
Yes, you should have some mistrust of your own ability to measure consequences, and that might give you utilitarian reasons to cultivate a tendency to e.g. keep promises even when your best guess is the alternative is a little better. And maybe that means "in most normal situations (or 'the vast majority of cases') following the rules is the way to go." But this kind of consequentialist justification for a buffer zone only goes so far, and when billions of dollars of funding for the most effective charities (and therefore millions of lives) are at stake, we are outside the buffer zone where the very general version of this point applies. The plausibility of this kind of deference to commonsense norms in cases like "should I cheat on my significant other if I calculate the EV is higher?" dissipates when the stakes get higher and higher.
I know why they wouldn't want to say it out loud, but I think what the utilitarian should think is "if someone really has good reason to think they can save millions of lives by defrauding (mostly relatively well-off) people and can get away with it, they absolutely should. If SBF was reasoning this way, then he made a mistake, not because he didn't respect simple moral prohibitions, but because he overestimated how long he could get away with it and underestimated the social cost to the movement he publicly associated with." True utilitarians really are untrustworthy when lots of utility is on the line! And they should own that consequence!
And rule utilitarianism is not a card that just any utilitarian can pull out in response to these cases - rule utilitarianism is a fundamentally different moral view - a much less popular view than act utilitarianism, with its own set of (quite serious!) problems. Most consequentialists in the EA movement are not rule consequentialists, and they can't just whip its reasoning out at their convenience - they would have to give up their moral view.
> But this kind of consequentialist justification for a buffer zone only goes so far, and when billions of dollars of funding for the most effective charities (and therefore millions of lives) are at stake, we are outside the buffer zone where the very general version of this point applies. The plausibility of this kind of deference to commonsense norms in cases like "should I cheat on my significant other if I calculate the EV is higher?" dissipates when the stakes get higher and higher.
I don't buy it. If you're at the point where you have billions of dollars to donate to charities aligned with your values, and then consider whether to commit a crime, you stand to gain billions to donate if your crime works out, and to lose billions to donate (plus immense reputational harm) if it doesn't. Plus as Scott points out, the utility of donating money is usually sub-linear (i.e. the first dollar goes much further than the 100 billionth), so even if you had a perfectly legal method with a 50% chance to double your money and a 50% chance to lose everything, that would already be negative EV.
So I just don't see how increasing the stakes is supposed to make integrity *less* important rather than *more*.
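The sub-linear-utility point in this comment can be sketched numerically. Assuming, purely for illustration, a square-root utility of charitable dollars (any concave function gives the same qualitative result):

```python
import math

# Sub-linear (concave) utility of charitable dollars: sqrt, for illustration.
def utility(dollars):
    return math.sqrt(dollars)

bankroll = 10e9  # a hypothetical $10B available to donate

# The comment's gamble: 50% chance to double the money, 50% to lose it all.
ev_dollars = 0.5 * (2 * bankroll) + 0.5 * 0           # equals the bankroll
ev_utility = 0.5 * utility(2 * bankroll) + 0.5 * utility(0)

print(ev_dollars == bankroll)          # True: the bet is fair in dollars
print(ev_utility < utility(bankroll))  # True: but negative EV in utility
```

Concretely, 0.5·√(2W) ≈ 0.707·√W < √W, so a gamble that is exactly fair in dollar terms is already a losing bet once donations have diminishing returns.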
Also, this stuff is not new; Scott linked to a Yudkowsky essay from 2008 (!) on the topic. That said, while LW definitely stressed this point a lot, EA may have done less of it? From a recent Yudkowsky post on the EA forum:
> I worry that in the Peter Singer brand of altruism - that met with the Lesswrong Sequences which had a lot more Nozick libertarianism in it, and gave birth to effective altruism somewhere in the middle - there is too much Good and not enough Law; that it falls out of second-order rule utilitarianism and into first-order subjective utilitarianism; and rule utilitarianism is what you ought to practice unless you are a god. I worry there's a common mental motion (in some-not-all people) between hearing that they ought to ruin their suit to save a drowning child, instead of taking an extra 15 seconds to pull off the suit first; and people thinking that maybe they ought to go rob a bank, to save some children. But that part, about the psychology of some people who are not me, I'm a lot less sure of.
---
Finally, it's much harder for the EV calculations to come out positive if you actually care about other people or x-risk. In that regard, purely selfish people (e.g. sociopaths) have it much easier to do ends-justify-the-means calculations with positive EV, because they can more easily discount the harm they cause.
And how many examples do we have of not-purely-selfish people successfully making accurate ends-justify-the-means calculations, anyway?
Again, if the claim is that utilitarianism implies SBF shouldn't have committed crimes because risk of discovery, reputational damage, etc. makes reasonable calculation of expected utility negative, that's fine - I agree. That's not the claim I'm challenging - it's literally what I say the utilitarian should think. The claim that I'm challenging is that utilitarianism implies you ought to follow the rules even when your best calculation is that the expected value of breaking them is much better (by say, millions of lives). The utilitarian reason for SBF not to commit fraud, in other words, depends heavily on precisely the details of his case that bear on calculated EV, and not that good utilitarians should override large differences in calculated EV with simple deontological rules.
" there is too much Good and not enough Law"
That surprised a laugh out of me, am I and the great oracle actually on the same side here about the necessity of rules and following them?
A significant problem is that the rule of "follow the rules" depends on assessing that the rules are just and moral. Frequently there's a lot of evidence that they aren't, but it's often still a good idea to follow them. But how do you decide?
A significant chunk of the LW sequences is about decision theory and stuff like the Prisoner's Dilemma and Newcomb's Problem, all of which require a certain kind of internal lawfulness and integrity, with real-life analogies like being a predictably good citizen / trading partner, etc. Same with being truthseeking, not succumbing to reasoning errors, etc.
Everything I read leads me to believe people only see Twitter for its face value.
Am I the only person who thinks Elon Musk paid $44B for a real-time prediction market? Or perhaps it's a tool to read public attitudes, or even, à la Cambridge Analytica, an opinion-steering tool.
These tools are available if one only applies a little big data thinking to public perception models.
So far attempts to use Twitter to predict things, like moves in the financial markets, have not worked out great for the people working on them. By the time Twitter knows what's happening, things are already played out and you can see it in conventional indicators.
I think if he were doing that, he would be working on features affecting his ability to view or steer opinions more, and less on random UX stuff
(Counterpoint: some of his paid blue check ideas might also be usable for steering public opinion).
What a crowded thread!
On the issue of FTX: Why does anyone think that crypto has enough "real" value/potential/attractiveness to justify multi-billion dollar valuations?
The first explanation I heard for crypto was that it allowed secure, costless, untraceable transactions. But, "secure" is pretty well provided by credit cards. "Costless" is a nice concept, but credit cards and banks are pretty low cost, and who pays for all the server farms generating the blockchain if nobody ever pays anything? "Untraceable" seems mainly useful for criminals, and we've seen that, if law enforcement gets serious, transactions are actually very traceable, which ought to be obvious if the whole history of every transaction is in the blockchain.
So, it seems to me that crypto is, and always has been, a fraud. Maybe not in the legal sense, where people are knowingly selling empty sacks to gullible marks. But fraud in the practical sense that there isn't, and never will be, anything behind the smoke and mirrors.
Sorry if everyone else already knows the answers, but I've never seen an attempt to address these in a serious way.
It's digital cash. There are rules beyond just the law that bind how credit card/bank transfers are allowed to take place. These are often onerous, and people tend to pay each other in cash when they want to circumvent them. Cash has physical limitations, so preferably there would be a digital version of it that can still circumvent rules like "you can't withdraw more than $x without asking for permission", "you can't use this money to pay for pornography", and "you have to wait 2-3 days for this money to actually transfer". The high valuations are generally a function of the idea that if every person in the world used digital cash, then the digital cash industry would necessarily be a very significant portion of the global money supply.
I've heard this concept. Digital cash is a long, long way from performing this function. For one thing, it's not generally accepted, so its usefulness is very limited. For another, its value fluctuates wildly - you might be paid $100 in digital coins today and find it is worth $50 tomorrow. This might be fun if you're playing with things that don't matter much, but it would be very hard to run a business or household this way. For a third, its security seems to be uncertain - there have been a lot of implosions of crypto markets and exchanges, and some cryptocurrencies never take off at all. Even if BTC itself doesn't implode, it will be hard for anyone other than aficionados to know how to keep it secure. There are risks with conventional currency as well, but people at least understand those risks.
You asked why anyone believes crypto has enough real value to justify multi-billion dollar valuations. This is the fundamental answer to that question. You can disagree about the feasibility, but I think it's unfair to paint the entirety of its history as a fraud.
Well, I think it's fair to paint the entirety of its history as having nothing behind the smoke and mirrors. Not that I expect to convince anyone. My financial advisor is recommending some small level of exposure to crypto. I have refused to have anything to do with it. I'm quite confident that I'm right to do so.
I think part of the original goal was to get rid of inflation and force an end to central banking. This is why there is a fixed supply of Bitcoin. It solves the issue of people using the Fed to enrich themselves by foisting unpayable debts onto future generations.
I think crypto may eventually find use as an asset less susceptible to inflation, if the entry/exit points become well regulated enough to prevent leverage-driven volatility.
> seems mainly useful for criminals
Yes, and?
If it’s very useful for criminals and not especially useful for non-criminals, then what incentive is there for non-criminals to use it?
If your goal is to create a widely used currency, then you’re going to need it to be used by more than criminals. At some point all the non-criminals are going to realize they aren’t getting any value out of it, and that’s when the house of cards will come crashing down.
This. There are, to be sure, plenty of "criminals" that we may find sympathetic and want to facilitate their crimes. The Hong Kong resident who wants to take his wealth with him as he flees to a less oppressive city, is a criminal by Chinese law. I'm glad that, for a while, cryptocurrencies could help a whole lot of such sympathetic criminals move money around unseen by very unsympathetic authorities.
But if that's *all* your currency is good for, then it's not going to last once the associated criminality rises above the tolerable-nuisance value in the authorities' minds. And I've yet to see the cryptocurrency that any law-abiding normie would prefer to dollars or euros as a medium of exchange.
The authorities can only affect fiat on-ramp and off-ramp, which isn't a big deal if you don't want to speculate on normies FOMOing in. XMR is already delisted from most exchanges because it is too good at anonymity, and yet it's trivial to get it.
If I have a bunch of XMR, and I want to pay the rent or buy food or put gas in my car, how would I go about doing that without an "off-ramp"?
Trade it with some reputable counterparty on localbitcoins, for example, possibly exchanging XMR to BTC/ETH in any of the available non-KYC ways (of which there are many). Or perform the transaction in a country that has legal businesses doing such transactions.
Crypto is not for directly paying at a gas station, it doesn't scale well for that purpose. It's a way to do anonymous business regardless of your location, and a store of wealth that cannot be seized if you use it correctly. Both of these are immensely valuable.
And, the government has demonstrated that it can break the "anonymity" and use the blockchain evidence to identify and prosecute the criminals. It could also lean on banks and other regulated financial institution to make it much harder to convert between crypto and "real" money. Which implies: if crypto becomes really useful for criminals, the governments of the world have the incentive and the means to shut it down so that it is much less useful.
>On the issue of FTX: Why does anyone think that crypto has enough "real" value/potential/attractiveness to justify multi-billion dollar valuations?
Overall crypto skeptic here, but I'll try to say some good things about crypto.
"Untraceable" is mainly useful for criminals and is a net negative... in the world now. But in a worse world 'untraceable money for criminals' is probably a good thing. Whatever your political affiliation, imagine your nightmare scenario political leader taking over and criminalizing your lifestyle or political party or whatnot. If that happens I will be glad that untraceable crime money is a thing. For this reason alone I want some kind of crypto to exist into the future. Though it absolutely doesn't need to be trillions of dollars of market share - in fact crypto being mainstream likely makes it less useful for this purpose because it will be so high profile.
The stuff with smart contracts and other even weirder recent tech is, at the very least, weird enough that I'm open to the possibility of something good being there. I think it's mostly silly (or even mostly scams) but I can't rule out genuine financial innovation could happen (or be happening) that isn't possible in normal finance.
Lastly, Vitalik Buterin seems to be the real deal. So many crypto people sound like marketing bullet points or their entire conversation is about prices going up, Vitalik genuinely seems to be in crypto for the tech and ideas.
The issue with smart contracts is that they create more issues than they solve.
"Oh I don't want to pay pricey lawyers and have them meddling in my transactions with others, instead I will use this protocol where all the rules of the transaction are written into the transaction itself!".
"Umm who is going to write those rules for you?"
"People who are experts in rules and transaction pitfalls, of course."
"So lawyers?"
"umm yes I guess lawyers"
"And um who is going to make sure those rules the lawyers wrote are actually coded correctly into this transactions and will be executed in the way you intended under the circumstances you agreed to?"
"Well I guess we will need some crypto/coding experts too"
"Ok, so in an effort to escape from needing lawyers mediating your contracts, you have now created a situation where you need both lawyers and coders..."
One driver for wanting some outside-the-mainstream payment mechanisms is the use of the banking system as a mechanism for censorship, as with the Canadian protests, or earlier, the US credit card companies not allowing donations to Wikileaks.
Well, to escape government monitoring and interference, you're pretty much always better going low-tech than hi-tech. The telephone was a boon for government spying, and social media (as a way of communication) even better. It's really, really, hard to monitor what people say if they tend to say it to each other's face, or pass samizdat around. It's much easier if they transmit it over big static physical infrastructure (governments are very good at dominating fixed physical assets) operated by big companies that are quite dependent on a favorable regulatory environment.
Plus if you're in an environment where you don't trust the government, why are you trying to send money to or from recipients that you don't trust? Maybe they work for the government and you're being set up! I would think in an oppressive situation what you want is a way to transfer money to and from people *that you already trust*.
I agree this would be nice, but I think practical experience has shown it isn't actually going to be beyond the arm of the law in this way.
We already have tons of examples of what happens if all transactions are traceable. For instance, the worldwide sanctions on Russia after it declared war, wide-scale coordination of financial services against sex work, or the situation where people donated to the trucker protest in Canada and then others wanted to trace the donations and cancel them.
Even if one agreed with all these prohibitions so far, that still leaves the problem that the values of governments can change over time, and then it would be very convenient for there to be a method of making transactions that cannot be prevented.
Here's one Twitter thread on the topic of freedom to transact: https://twitter.com/RepTomEmmer/status/1503360979278700545
Theoretically, as I understand it, crypto comes into its own in the hypothetical future where central bank shenanigans have destroyed all credibility among fiat currencies, such that Joe Average is just not willing to be paid in dollars or yen or euros anymore, but insists on BTC (or whatever), and all the stores take it because that's all anyone offers, and people live happily ever after because it is no longer possible for governments to manipulate the currency for political ends, or impose currency controls, so stuff like inflation/deflation or the decoupling of interest from risk just doesn't happen any more.
The rational reason to invest in it early on was pretty much a land-rush speculation kind of choice, where you said "ah, someday this worthless swamp with nothing but mosquitoes and cholera in it will be New York City and cost $100 per square inch, so if I buy 100 acres at $1/acre right now I will be exceedingly rich in that future."
This has always been the main argument against crypto (which I've made consistently). The counterargument is something like "the existing financial system is kind of bad in various ways so crypto could potentially replace it, in which case it would justify it" (SBF's fundraising presentation included a bit about "I want our customers to be able to use it to buy a banana anywhere in the world". Transferring money like that is a legitimately nontrivial problem right now.)
Sorry to everyone if this was already pointed out, but... isn't this St. Petersburg shit just martingaling? Its flaws as a strategy have been well known since literally the 1700s, if this is the quality of thinking in EA circles it's shocking that they were even trusted with one dollar of fake money. These people need traditionalism like pagans need Jesus.
No, St. Petersburg paradox is not martingaling (contra the Wikipedia article on martingaling, which asserts that it is, citing an abysmal source). I'll give you my understanding of the difference, and someone with better math than I have can tell me where I'm wrong.
Martingaling means doubling your bet after each loss and resetting it after a win. With fair odds it has zero expected value for any finite number of bets; in the limit you win your base stake almost surely, but only if your bankroll is unbounded.
The St. Petersburg paradox involves a series of doubling payoffs at 50% likelihood each, but going to zero permanently at the first loss. The expected value is positive for any finite number of bets, and goes to infinity in the limit. There's no rational point to stop playing according to naive EV calculations, and yet you're guaranteed to go bust at some point - hence the paradox.
The two have totally different mathematical properties. The martingale makes no mathematical sense to even start, while with the St. Petersburg paradox, it makes no mathematical sense to ever stop. This remains an unsolved problem.
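The "double until you lose" structure is easy to simulate. Here's a minimal Python sketch; the 60% per-round win probability is an assumed number chosen just to make each bet positive-EV, not anything from SBF's actual situation:

```python
import random

def double_or_bust(p=0.6, rounds=20, stake=1.0):
    """Let winnings ride on repeated double-or-nothing bets.
    Each round wins with probability p; the first loss wipes out everything."""
    wealth = stake
    for _ in range(rounds):
        if random.random() >= p:
            return 0.0  # bust: one loss and it's all gone
        wealth *= 2
    return wealth

random.seed(0)
trials = [double_or_bust() for _ in range(100_000)]
bust_rate = sum(t == 0.0 for t in trials) / len(trials)
mean = sum(trials) / len(trials)
# Per-round EV is 2 * 0.6 = 1.2x, so naive EV after 20 rounds is 1.2**20 ~ 38x
# the stake, even though the chance of surviving all 20 rounds is only
# 0.6**20 ~ 0.0037%. Almost every trial busts; a few rare survivors carry the mean.
print(f"bust rate: {bust_rate:.5f}  empirical mean: {mean:.2f}")
```

The point the simulation makes is the one above: the naive expected value keeps growing with every round, while the probability of ending with anything at all shrinks toward zero.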
SBF seems to have had the belief that in charity, dollars are equivalent to utilons, at least at the sums he was working with. Going from $14 billion to $28 billion allows you to generate twice as many altruistic utilons, in this thesis. Of course, crashing to $0 doesn't just bankrupt you. It does what it's doing right now, which is creating all kinds of disorder for the EA movement. And it seems silly to me to claim that money doesn't have diminishing returns in real life.
Furthermore, SBF wasn't being offered any sort of St. Petersburg paradox deal. He was making money through some combination of normal market activity and fraud. The assumptions underpinning the St. Petersburg paradox do not apply in real-world finance. Nobody's offering an infinitely repeated 50% chance of doubled-payoff-or-bust.
The solution to the St. Petersburg paradox is well known: It only makes mathematical sense to never stop, if the bank is infinite. And pretty big is not infinite.
This is incorrect.
It is correct. (Or more precisely, it's one of several answers which are correct.)
(Seriously, how am I supposed to respond to a bald "this is incorrect"?)
You could try linking to a credible source supporting your statement. Your proposed solution is not in the Wikipedia article, and it's also not on the Stanford Encyclopedia of Philosophy's page on the subject. If I can't find it in either of those locations, I'd be surprised if it's "well known."
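For what it's worth, the finite-bank version of the claim is easy to check directly. In the classic game (payoff 2^(k-1) if the first tails lands on flip k), capping payoffs at a finite bank makes the expected value finite and surprisingly small; a sketch:

```python
def st_petersburg_ev(bank):
    """EV of the classic St. Petersburg game with payoffs capped at `bank`.
    Uncapped, each flip contributes 1/2 to the EV, so the sum diverges;
    capping truncates the series to a finite total."""
    ev, k = 0.0, 1
    while True:
        payoff = min(2 ** (k - 1), bank)
        ev += payoff / 2 ** k
        if 2 ** (k - 1) >= bank:
            # every later term pays `bank`; their sum is a geometric tail
            # equal to bank / 2**k
            ev += bank / 2 ** k
            return ev
        k += 1

# A bank of 2**30 (about a billion dollars) caps the fair price near $16.
print(st_petersburg_ev(2 ** 30))  # → 16.0
```

Whether this counts as "the" solution to the paradox is a separate debate, but the arithmetic itself (finite bank implies finite EV) is uncontroversial.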
As best I can understand the thinking went something like this:
1) You trust that your trading/arbitrage strategy gives you a positive EV in the long run.
2) You take bigger positions than would a trader who is worried about "gambler's ruin", a run of bad luck that wipes out your bankroll.
3) The reason you don't worry about your bankroll being wiped out (I think) is that you are merely one altruist in the game, so it is not your individual bankroll that matters but the collective bankroll of all altruists everywhere. Your personal loss is irrelevant; altruists win if they continue to make positive EV bets. (This, I think, is why the St. Petersburg Paradox can be ignored.)
4) Another reason you bet so aggressively is that the utility of your winnings grows linearly because you will donate them to charity, as opposed to logarithmically, which would be the normal assumption if you were only using your winnings on yourself.
5) In the interview with Tyler, SBF says something about assuming "independent worlds". Not sure, but I thought maybe he meant that enough bankrolls exist in the multiverse so that even a very low positive EV justifies extremely aggressive bets. (I really might be misunderstanding that one, though. He doesn't use the word "multiverse".)
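Point (4) is essentially the Kelly-criterion debate: a log-utility bettor sizes bets at the Kelly fraction, while a linear-utility bettor wants to stake nearly everything on any positive-EV bet. A small sketch of the difference; the 60/40 even-money bet is an assumed example, not SBF's actual odds:

```python
import math

def expected_utility(f, p, b=1.0, log_utility=True):
    """Expected utility of betting fraction f of wealth (normalized to 1)
    on a bet paying b:1 with win probability p."""
    win, lose = 1 + f * b, 1 - f
    if log_utility:
        return p * math.log(win) + (1 - p) * math.log(lose)
    return p * win + (1 - p) * lose  # linear utility: just the EV

p, b = 0.6, 1.0  # assumed edge: 60% chance on an even-money bet
fractions = [i / 100 for i in range(100)]  # 0.00 .. 0.99
best_log = max(fractions, key=lambda f: expected_utility(f, p, b))
best_lin = max(fractions, key=lambda f: expected_utility(f, p, b, log_utility=False))
# Log utility picks the Kelly fraction p - q = 0.2; linear utility picks
# the largest fraction on the grid, since EV rises monotonically with f.
print(best_log, best_lin)  # → 0.2 0.99
```

So if dollars really were utilons all the way up, betting nearly the whole bankroll on any positive edge is the "rational" move, which is exactly the aggressive sizing described above.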
1) The thing described by SBF on that podcast is pretty much the opposite of martingaling. Instead of repeatedly doubling down until you win (with epsilon risk of losing everything) you repeatedly double down until you lose (with epsilon chance of winning everything).
2) The actual St. Petersburg paradox is something else again, see https://en.wikipedia.org/wiki/St._Petersburg_paradox
"Instead of repeatedly doubling down until you win (with epsilon risk of losing everything) you repeatedly double down until you lose (with epsilon chance of winning everything)."
But... but that's transparently *even stupider*!
If God tells you that he's going to kill you (or worse) if you aren't supreme ruler and owner of all the Earth by this time next year, what else are you going to do? A whole lot of double-or-nothing bets at least give you that infinitesimal chance - and it will be exciting and fun while it lasts.
An awful lot of EA, possibly including SBF, seems to believe that the AI Gods will consign them all to bignum simulated Hells unless they manage to, not quite rule the world, but at very least exert a substantial influence over the computing industry in the next few years - and that's probably going to take many billions of dollars. If you don't have that sort of money, what else are you going to do?
You have to be very smart to be this stupid, which is why firms that hire on people familiar with all these paradoxes and thought experiments and maths problems have old guys in charge who can and will put a stop to gambling with the funds.
The FTX/Alameda problem was that there weren't any old guys in charge: the kind who wouldn't know a logic problem if they tripped over one, but do know how markets work in the real world.
I still feel like it's obviously acceptable to engage in unethical activities for the greater good, i.e. defrauding investors in order to send their money to deserving causes. The lesson here is that it doesn't serve the greater good to do it in a really obvious way that gets you instantly caught.
You can never be sure that you've figured out what will happen, so certainty is unjustifiable.
That said, if an activity is unethical, then it *is* unethical.
The catch is that what is "unethical" is highly context dependent. And outside observers may well have a different opinion of it than you do.
Consider: What is the difference between a soldier defending his homeland and a terrorist?
OK, what about the Cossacks raiding Lidice, were they loyal soldiers honorable following orders or brutal terrorists? On what basis are you deciding?
Yes, that's an intentionally extreme example, but I would predict that were you to have asked one of the Cossack troopers if his actions were moral and ethical he would have said "yes".
If you don't like that, consider George Washington, noted traitor against Britain. What about the famous British General, Benedict Arnold?
The problem isn't that it's sometimes justifiable to engage in unethical behavior, it's that it's often quite difficult to define.
I think you can make entirely plausible arguments for this sort of thing in the abstract, but that in practice the unethical behavior is then applied far more widely/loosely than the original argument implies. As an example, one justification for the US torture program in the war on terror was a utilitarian/consequentialist one involving a nuclear terrorist. And yet, the way torture was apparently actually used was very different, and much wider, than was imagined in that original justification. We were not torturing terrorist masterminds who'd planted nukes in Manhattan, we were mostly torturing nobodies who might have some information we wanted, or torturing terrorists already in custody for months or years to get evidence in other cases.
I think this is the usual situation. There are indeed weird edge cases where it would be morally acceptable for me to cheat on my wife (involving her being braindead, or time travel, or some other bizarre scenario you can spin up). But if I start working up moral justifications for cheating on my wife, the most likely bet is that I'm going to end up justifying infidelity of a more standard "sleep with your hot coworker" type.
"I still feel like it's obviously acceptable to engage in unethical activities for the greater good"
To which I can only reply with a quote from "The Blue Cross":
"Reason and justice grip the remotest and the loneliest star. Look at those stars. Don't they look as if they were single diamonds and sapphires? Well, you can imagine any mad botany or geology you please. Think of forests of adamant with leaves of brilliants. Think the moon is a blue moon, a single elephantine sapphire. But don't fancy that all that frantic astronomy would make the smallest difference to the reason and justice of conduct. On plains of opal, under cliffs cut out of pearl, you would still find a notice-board, 'Thou shalt not steal.'"
"Do evil that good may come of it" is not permissible, and the ends do not justify the means.
From "The Flying Stars":
"Men may keep a sort of level of good, but no man has ever been able to keep on one level of evil. That road goes down and down. The kind man drinks and turns cruel; the frank man kills and lies about it. Many a man I’ve known started like you to be an honest outlaw, a merry robber of the rich, and ended stamped into slime. Maurice Blum started out as an anarchist of principle, a father of the poor; he ended a greasy spy and talebearer that both sides used and despised. Harry Burke started his free money movement sincerely enough; now he’s sponging on a half-starved sister for endless brandies and sodas. Lord Amber went into wild society in a sort of chivalry; now he’s paying blackmail to the lowest vultures in London. Captain Barillon was the great gentleman apache before your time; he died in a madhouse, screaming with fear of the “narks” and receivers that had betrayed him and hunted him down."
Chesterton was wrong though. God isn't real. All the atheists and materialists he makes fun of in his stories were actually just correct in their philosophy. Also he was fanatically racist even by the standards of his day, which suggests to me that he's morally unreliable.
I think the counterargument presented above is at least partly "if you think you're doing the good version where it won't blow up, assume that you're wrong and you're actually doing the bad version where it will."
Probably true but hard to prove, as there's no way to know how many people commit massive amounts of fraud and don't get caught. Maybe it works 99 times out of 100 but we only hear about the guys who fail!
Should you ever take any kind of risk without assuming you're misevaluating its safety and actually it's going to blow up in your face? If so, what makes "unethical activity for the greater good" type risks deserve to be treated differently?
"Should you ever take any kind of risk without assuming you're misevaluating its safety and actually it's going to blow up in your face?"
I think that is actually a pretty good rule of thumb. "Measure twice, cut once" and the like advice. No matter how expert you are, there is always the chance something can go wrong, and if the results of "this going wrong" are losses in the billions, then you should be even more cautious.
If Bankman-Fried had taken clients' money and punted it all on the favourite at Haydock in the 3:30, it would be patently visible how stupid this was. Gambling on crypto is still gambling.
I think my claim is that there are certain rules it's inadvisable to break, whether for the greater good or for your own practical benefit.
For example, "never go cave diving unless you are a professional cave diver". We implement these rules not just because they're good ideas, but because a normal person evaluating the situation rationally is very likely to make mistakes. You think "a cave, whatever, how bad could it be, I'll just go in and out", and then apparently according to everyone who discusses this topic you get disoriented and die.
I don't think all risks are like this. I never hear any rules about "don't walk into caves on land". There are just some risks where everyone agrees you're going to make mistakes unless you follow the time-tested advice.
I think this is useful for personal things where you don't want to die (like cave diving), but extra useful for moral risks where you don't have the right to risk other people's safety and so we would want to agree on a compact where no one does this.
Is that the kind of thing you're asking about or am I missing your point?
> I never hear any rules about "don't walk into caves on land".
If you need some nightmare fuel, google the Nutty Putty Cave, then look at YT videos of people going through similar caves. To be fair, that does look like a terrible idea immediately, but it's a lot more terrible once attempted.
I think I'd still just trust expected value assessments in the cave example. But since I don't know anything about cave diving, my assessment would mostly rely on received wisdom.
If I was some cave scientist, and developed a new, well tested, theory on cave diving risk, and the cave was full of gold, I'd totally ignore the rule.
If the cave had "a chance to solve all the worlds problems" at the bottom, I should go in even with near certainty of death.
The stuff SBF was doing looks a lot like the kinds of risks that all the really big financial institutions did originally, to get big in the first place.
I don't know what FTX's end game was, but it could have been something super ambitious, like "replacing all established currencies with SBF-coin and taking over the world", which would be totally possible if you took on enough risk and got lucky enough. Which would easily be a big enough pay-out even at 0.01% chance of success.
I think this FTX/SBF affair has a lot of parallels in the 19th-century banking business in the USA: give us your gold, we will give you a bank note. All your gold is stored in a nice safe building and you don't have to worry about stashing it away; you can just carry around this piece of paper and use it to buy things. It's not a bad idea, but it sure malfunctioned a lot.
"If I was some cave scientist, and developed a new, well tested, theory on cave diving risk, and the cave was full of gold, I'd totally ignore the rule."
And then some poor son of a bitch in the aqua rescue squad would be tasked with getting your bones out of there.
OTOH, he may end up finding a whole lot of gold....
If anyone is still sleeping on Star Wars Andor, please give it a try. Fans of 'Tinker Tailor Soldier Spy' in particular. So much juicy Star Wars bureaucracy and just plain day-to-day life. Action scenes are used sparingly but when they hit, they hit hard. Blasters never felt so deadly.
It's fun in a lot of ways, but as someone who's been watching it when exhausted after work, I'm having a lot of trouble keeping track of what the hell is going on with a lot of the threads.
Sometimes I have trouble watching it after a tough day because it's such a bleak and depressing world to mentally exist in after a tough day of work where I'm already stressed or sad. Though once I start an episode I'm totally riveted.
And so far they keep pulling back on the darkness to keep me bought in. Often more in the setting than in main characters. Ferrix isn't a den of scum and villainy, it's a community where neighbors check in on the old woman who never turns on her heat. Prisoners try to cover for and protect an old man who can't keep up the work quota. It's weirdly hopeful amidst some grimdark bits in other places.
EA needs to deal with the fact that its sociopathic framework will attract sociopaths.
I call it sociopathic because it frames problems and solutions in a very utilitarian, maximizing way, without much regard for the individual emotions of others. This might even be the best framing for some problems. But people with sociopathic tendencies will be attracted to such framing, and these sociopaths are the ones most prone, in the end, to do egotistical things that harm many people.
EAs have thought about this already. See https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous
But I would add that I think there is a large amount of individualism in EA, even though most of the things EAs care about are better solved through collective action.
You mean those that aren't already in finance?
I agree it's more likely to attract sociopaths, but not so much as adherents but rather as predators. People who have strong beliefs in their own powers of reasoning and perception are better than average marks for con artists. That's one reason why the long con almost always begins with flattery. ("Now some people are so stupid they can be taken in by fraud, but you sir I can see are made of better stuff, you would spot a con in an instant. So let me tell you of my golden 100% honest fabulous opportunity, which only people with your giant brain can even comprehend...")
That's called the Kansas City Shuffle. It depends on getting the mark to work out that a con is going on, but keeping the mark ignorant of the real con. While the mark is thinking they're so smart and are going to walk away with all the gains, you are milking them like a cow:
https://en.wikipedia.org/wiki/Kansas_City_Shuffle
"In order for a confidence game to be a "Kansas City Shuffle", the mark must be aware, or at least suspect that he is involved in a con, but also be wrong about how the con artist is planning to deceive him. The con artist will attempt to misdirect the mark in a way that leaves him with the impression that he has figured out the game and has the knowledge necessary to outsmart the con artist, but by attempting to retaliate, the mark unwittingly performs an action that helps the con artist to further the scheme.
The title refers to a situation where the con man bets the mark money he can't identify what state "Kansas City" is in. The mark, guessing that the conman was hoping to trick him into saying Kansas, identifies Kansas City, Missouri as his answer. The con man then reveals that there is a much less well-known Kansas City, Kansas meaning Kansas was actually the correct answer."
Famously, this is alleged to have been true of Madoff's scam--very sophisticated investors had funds with him, and it seems like they must have known his returns were implausible, but they probably thought he was doing some unethical thing to someone else (frontrunning trades or something) and furnishing them money. A bunch of them ended up losing a huge pile of money.
Are sociopaths actually disproportionately represented in EA? You're stating people need to "deal with the fact" of something not actually established.
Isn’t the entire FTX debacle pretty solid evidence of sociopathy?
Even if one granted an instance of sociopathy, that would not tell us what the relative rates are in EA vs the general population.
True, but we don’t have a way to get that information. What we do have is a very prominent example of sociopathy right in the leadership of EA.
I took the top comment to mean that EA is especially vulnerable to sociopaths, and in that framing the FTX scandal supports the claim.
It does not establish the claim as a "fact".
Sure, but surely it’s enough for us to make an update on our priors towards “maybe EA should think about how to avoid accidentally giving a lot of money to sociopaths.”
No. Sociopathy is a specific combination of traits, it doesn't just mean "bad", and I expect most large-scale frauds are committed by non-sociopaths.
I disagree. Sociopathy, as it’s generally used, is characterized by an incapacity to feel empathy. People who can intentionally do bad things without feeling any remorse or sympathy for their victims are, if not outright sociopathic, at least exhibiting signs of sociopathy.
And I strongly suspect this applies to most people who commit large-scale fraud.
I think people who truly had strong empathy for all humans would be more likely to do this sort of fraud. What protects us is that normal people have more empathy for the people near them than the millions of distant strangers who are deeply suffering.
And even that may not protect us. Robin Hood is usually seen as an empathic character. He personally knows the poor people relying on him better than the rich he robs, so empathy compels him towards crime.
I think this sounds nice, but it ignores how actual human psychology works. That is also what I believe Daniel C was alluding to in the top comment.
There are plenty of people who, in one way or another, fight a group on behalf of another, and I don’t think that by itself requires sociopathic tendencies. People in those situations simply dehumanize their opponents in their mind. A Robin Hood type or a freedom fighter would not actually have non-hostile interactions with their opponents.
I don’t think, psychologically, you would be as likely to get the same effect with fraudsters. They have to lie to the face of people who come to them as allies, and they have to do it over and over for a very long period of time. It’s much harder to dehumanize someone who has done you no wrong and wants to be your friend, especially when they trust you deeply enough to give you huge sums of money.
Humans aren’t pure logic machines. If you are a genuinely empathetic human being, performing this fraud would be insanely difficult. No matter how “logical” it was to rip people off, your empathy would stop you. Nor would this empathy be something you could just turn off at will; you would have to be constantly fighting against it. Very very few people would have the conviction to do something like that.
Now, if you didn’t have empathy, that would be a lot easier. That’s the point.
And there’s another piece to this, and it’s a big problem for EA: EA people tend to say things like “people who truly had strong empathy for all humans would be more likely to do this sort of fraud.” If you’re the kind of person who would commit this kind of fraud, then the EA community is a great place to find yourself. There aren’t many other places where you can act blatantly sociopathic and still be lauded as an incredible person for doing so.
"The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”"
Scott, in light of your previous Contra Resident Contrarian post, please forgive me if I find this a little bit ironic. I am sorry you and other people are suffering because of this. I hope any damage can be minimized. But perhaps it might be time to update toward being a little bit more of a skeptic when it comes to unverified claims that other people make?
"Trusting people to accurately report their internal mental state in a medical context is good" =/= "Trusting people to accurately report whatever regardless of incentives is good". I would have hoped that on a 'rationalist' forum this distinction would be obvious.
Keep in mind that cryptocurrencies exist because of people who don't trust the normal fences, safeguards and institutions of our society. You buy bitcoin because you don't trust The Federal Reserve to responsibly fight inflation, or because you want to skirt the law and buy heroin or something else currently illegal.
Trust itself tends to be a two-way street. You should be more wary of people who themselves have low levels of trust in others. Therefore, losing trust in humanity because some crypto-traders turned out to be conmen is like losing trust in humanity because a heroin dealer in a dark alley ripped you off. Maybe put your trust in people living on the more central boulevards of society than in a counter-culture that has recently emerged under its bridges because they have renounced those boulevards.
And this is why I haven't touched crypto with a 10-foot pole. Not surprised by the crash, but sorry for lots of well-meaning people who are now feeling the fallout.
I think this is an unfair swipe.
Suppose I was arguing with a Holocaust denier, and I said I believed all the eyewitnesses and expert historians, and explained that it's dumb to doubt a bunch of honest-seeming people, and he said that was stupid.
Then some crypto people who I thought weren't doing frauds turned out to be doing frauds. Does the Holocaust denier have the right to say "Ha! You proved that you're too trusting, so now you have to admit I was right"?
Probably I need to change something in my belief net somewhere, but this shouldn't license people to assume it's whatever helps win their particular argument with me.
I think the analogy would be more accurate if you took the side of the Holocaust denier in the argument, because in the "Contra Resident Contrarian" post, you are mostly siding with people making claims that go against the mainstream opinion and/or the preponderance of evidence. I interpreted your post as an argument that reports of unusual mental states should be believed as a default, even if these reports go against mainstream opinion or the preponderance of evidence. Extending that claim to the Holocaust denier would imply believing that the Holocaust denier is genuine in his belief. Thus, even if you rightly and justly disagreed with his conclusions, it would mean not condemning or judging him for it, and giving him the same consideration as you would give to a spoonie. Please explain to me where I err in reaching this conclusion.
Sorry, that reads a bit like motte and bailey to me. Ucatione is not even implying it is unsound to accept the Holocaust or round-earth as facts for all practical matters. Might even trust wikipedia (as I do). Still: Science and even wikipedia do err at times, but even they do NOT see it as solved that crypto is safe & great, AI will change everything, and EA is the best way to do most good - or that most could get really, really high on meditation if they just tried a bit more. Not all Thais are monks. - Street-wisdom solved it long ago: whoever asks for bucks "to buy a ticket" is a beggar (I would not say fraud, because we all know, if we cared to know) - so is the young mom with baby in Calcutta who asks you to buy her a can of baby formula at that kiosk. (She will sell it back after you leave.) - It is possible even some higher-ups at FTX were not aware of anything wrong. More likely they thought of those leverage deals (or whatever was going on) as super-smart and legit. Maybe SBF did, too. It worked out so well, didn't it? And as ever: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” Upton Sinclair - You may find fine people at the FDA even, "trust" them alright, just do not place bets on that. Or do, but yeah: check the odds. ;)
How will you FURTHER update if the stories that the Ukraine funding Congress passed was laundered back through FTX to Democratic campaigns turn out to be correct?
What is the proof? I did due diligence research on Twitter, and there are no real proofs of anything, just myriads of accounts posting how it's "Confirmed". The only facts seem to be that FTX converted crypto to fiat for Ukraine and FTX donated to Dems (a measly $40m), and those facts do not imply anything. Other conjectures are given without proof. Based on how heavily it is being promoted on Twitter with the gutted misinformation team and Putin-friendly owner, it looks like just another "bioweapon pigeons" psy-op du jour.
I think we should hold off on the conspiracy theories. Right now there are a lot of rumours swirling around and some lurid readings of the situation. Jumping in with political bribery rumours isn't helpful.
I haven't been following this. How would you tell the difference between this version, vs. "Congress funded Ukraine because they support Ukraine, Ukraine invested in FTX because it was a fast-growing company, SBF/FTX donated to the Democrats because he liked Democrats"?
According to this Ukrainian official, the Ukraine government received donations in crypto and used FTX to turn them into normal money, but never "invested" in FTX: https://www.coindesk.com/layer2/2022/11/14/ukrainian-official-refutes-ftx-ukraine-money-laundering-rumors/
The claim is that the money Ukraine invested in FTX came out of the money Congress gave to Ukraine, rather than that money being spent on aid and weapons. Of course, dollars are fungible, and it’s possible Ukraine just decided to park the Congressional cash in FTX rather than a regular bank; but given previous “10% for the big guy” Ukrainian transactions, it’s worth looking into.
Well, first off, most of the US aid to Ukraine wasn't "here's some money, go buy weapons", it was "here's some weapons, don't waste your time in the market, just get back to killing Russians". Those weapons are kept on the US books as assets with a substantial dollar value, so you'll see news stories about how congress voted to give Ukraine another X billion dollars in aid, but that's not billions of actual dollars.
Second, how much *money* did Ukraine actually invest in FTX? I believe that at least some of the aid Ukraine has received (from anonymous foreign sources, not the US congress) came in various cryptocurrencies, but the people who sell things Ukraine needs don't take bitcoin so they'd want to use some sort of crypto exchange to turn that into hard currency ASAP. But did they actually give serious amounts of real money to FTX?
Third, there doesn't seem to be any evidence for any of this.
Fourth, that third point should have been the zeroth point, obviating the need for any of the rest, but here we are.
> Ukraine invested in FTX
where is proof for that?
What's the best source which focuses on how to tell whether it's just fungibility or something more serious?
I don't think there are any sources. Just people drawing connections where they wish there were some because it would help their tribe.
Without any sources it is not "drawing connections", it could be wholesale inventing connections.
Hold on a moment. I think we need a source for anything having happened at all before asking for a source for the reason. I would find it highly surprising if the Ukrainian government were putting any cash in FTX for any reason.
This question is posed by the person above you as a shadowy conspiracy but isn't it what happens in politics all the time?
I don't know if the allegation is true, but if it is, it would hardly be the first time that someone who gets money from the government via some policy (directly, in the sense of "government spends money on thing made by company X" or indirectly, like "government relaxes (or imposes) regulations that cause company X to save money at the expense of the public"), gives money to the politicians who supported the policy.
This seems like a worse version because it only makes sense if Ukraine was knowingly investing in a ponzi scheme.
>5: In light of recent events, some people have asked if effective altruism approves of doing unethical things to make money, as long as the money goes to good causes. I think the movement has been pretty unanimous in saying no.
I haven't had significant interaction with the EA movement, but I *have* read The Most Good You Can Do - AFAIK fairly close to an EA Bible - and I remember thinking that Peter Singer leant quite heavily into this. I mean, sure, he didn't advocate fraud IIRC, but stock-market speculation (which he did advocate) is basically zero-sum and as such it boils down to a coat of rationalisation over "rob people and use the money better than they would"; it's a very short hop from that to the thinner coat of rationalisation that is "defraud people and donate the money to charity".
So, y'know, maybe the movement as it now exists mostly disclaims fraud, but it's not surprising that fraud shows up when it's a fairly-reasonable conclusion to come to from a principle outlined in one of EA's founding texts, and I consequently suspect "unanimous" is an overstatement (like, sure, the people who think fraud in the name of charity is great probably aren't going to *say it in public*, because lol fedposting, but they're around).
(I have actually made the "Singer advocates robbery" argument a couple of years back, but I hadn't thought of the fraud part before so feel free to discount that part as hindsight bias.)
I think there's an important difference between winning zero sum games against people who've consented to play them (like in stock market speculation and other gambling) vs stuff like fraud, robbery, etc.
There is a very big difference in my view between trying to win a locally-zero-sum contest with positive externalities (or even neutral externalities, making it actually-zero-sum) and defrauding people, which is negative-sum. Someone who tries to win a contest for money (or prestige etc.) is not actually a "very short hop" from defrauding people for money, as should be fairly obvious from all the non-fraudulent contest-winners (and even more non-fraudulent contest participants, who aren't selected for having cheated).
Playing zero-sum games is not unethical in the same way, and I might even say not unethical at all.
Everyone's allowed to participate in a race, or a lottery, even though by participating you decrease others' chance of winning directly. It's not robbery because everyone participating chose to do so with the knowledge that the game was zero-sum, and took their chances.
Of course, it's better to do positive-sum things, but winning a zero-sum game against other willing participants is not 'robbery' even if it does reduce the amount of resources available to them.
Wishing everyone the best! If you’re among the folks impacted, remember to get yourself health resources you need. Positive investment in yourself now is a positive investment in all your future projects.
Here's a video I found interesting, about three thinkers who have influenced Putin's ideology: Ivan Ilyin, Lev Gumilev, and Carl Schmitt. The author's discussion of "Russian Lawlessness," the notion that the law in Russia has always existed in the service of the powerful, and never as a restraint upon them, is particularly worthwhile.
https://www.youtube.com/watch?v=sdFtqa54TuM
The author also mentions Alexander Dugin, who often comes up in modern Kremlinology, and argues that Dugin has not been particularly influential. He is more a popularizer of ideas than an originator of them. Ilyin, Gumilev, and Schmitt are the actual sources of the ideas driving the modern government of Russia.
I'll take a listen. But sounds broadly accurate. Dugin's schtick is to tell the west he's prominent in Russia (which he is somewhat) and then to turn around and leverage that into being important in Russia. He's not that important to Putin's ideology or that prominent among regime supporters (though he has his supporters among Putin's supporters).
Putin is particularly fond of Ilyin and Gumilyov. I've heard some references to Schmitt but that's more popular with Xi. He's apparently particularly fond of the Nomos. There's not a ton of difference but I suspect it shows Xi's Marxist upbringing that he favors Schmitt who's relatively legal-materialist in his outlook. Meanwhile Gumilyov and Ilyin are more, for lack of a better term, national-spiritualist. Particularly Gumilyov.
ETA: I have a lot of nits but no major disagreements. I also didn't learn much though. Not sure quite how to convey this. Nothing said is wrong exactly and the general direction is correct but there's a lot of small details that add up to some notable oversimplifications imo.
This extract is from a local (Australian) paper today:
Shockwaves from FTX will be felt around the world, with FTX’s 1.2 million customers, including those locally, now realising they never owned any of the bitcoins or other digital currencies acquired through the exchange.
Among other things, FTX essentially sold a “paper bitcoin” or an IOU from the exchange, which barely had any asset backing. As investors tried to close out their funds, it became clear that nothing was there. In addition, it became a major custodian for start-ups to hold their cash raised in funding rounds. FTX is understood to have offered to back a number of start-ups financially if they used its facilities.
I'm crypto naive. Is this correct? If so, how is it that nobody tried to withdraw their holding over the entire lifetime of the fund to put it somewhere else? Or did they, and the bitcoins magically appeared (purchased on other exchanges) for these infrequent cases, so nobody was any the wiser?
Not sure if I understand what you're asking, but presumably FTX had a lot of assets/cash, but just not every single asset/cash. So if one guy wants out or some small percentage want out on any given day for random reasons, they can handle it. But when everyone wanted out, it all crumbled.
I remember reading a few years ago (I think it was probably this: https://www.schneier.com/blog/archives/2018/06/regulating_bitc.html ) about how in the traditional-money world, there are gold merchants, where you can buy a specific bar of gold and they will track that you personally own that specific bar in that vault right there, and then there are banks, which write down how much money you've deposited but don't keep track of the specific physical bills you handed them and in fact loan most of that money out to other people so they only have a fraction of it on-hand at a given time.
And in the crypto world, most people assume that exchanges operate like the gold merchants, where there's a specific bitcoin in a wallet with your name on it, but that most of them actually operate like banks, where they just have a ledger saying HOW MANY bitcoin they owe you, and at any given moment they probably don't have enough bitcoin to pay back everyone simultaneously.
(And yet they generally don't follow the laws that real banks are forced to follow in order to protect their customers, and don't have FDIC insurance backing up the deposits. I've been vaguely expecting that cryptocurrency exchanges are going to gradually repeat all the banking disasters that lead to current banking laws, unless something else killed them faster.)
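The ledger-versus-vault distinction above can be sketched in a few lines. This is a toy model with hypothetical names, not any real exchange's code: the exchange records how many coins it *owes* each customer but keeps only a fraction liquid, so scattered withdrawals succeed while a simultaneous rush fails.

```python
class ToyExchange:
    """Toy fractional-reserve ledger (illustrative only)."""

    def __init__(self):
        self.ledger = {}   # customer -> coins the exchange owes them
        self.reserve = 0   # coins actually on hand

    def deposit(self, customer, amount):
        self.ledger[customer] = self.ledger.get(customer, 0) + amount
        self.reserve += amount // 10  # only 10% stays liquid; the rest is lent out

    def withdraw(self, customer, amount):
        if self.ledger.get(customer, 0) < amount:
            raise ValueError("no such balance")
        if self.reserve < amount:
            return False  # bank-run failure: the ledger says yes, the vault says no
        self.ledger[customer] -= amount
        self.reserve -= amount
        return True

ex = ToyExchange()
for i in range(100):
    ex.deposit(f"user{i}", 10)

# One customer exiting is fine...
assert ex.withdraw("user0", 10)
# ...but when everyone heads for the door at once, the reserve runs dry.
runs = [ex.withdraw(f"user{i}", 10) for i in range(1, 100)]
print(sum(runs), "of 99 later withdrawals succeeded")  # prints: 9 of 99 later withdrawals succeeded
```

The point of the sketch is that nothing looks wrong from any single customer's view until withdrawals cluster, which is exactly why such a setup can run for years before collapsing in days.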
You CAN create your own crypto wallet with your own cryptographic keys that you generated and managed yourself, but hardly anyone does.
I know that for Bitcoin, there's actually a limit on the number of worldwide transactions per unit time, and people bid on them in order to be able to use them, and so moving money from one wallet to another wallet actually costs money every time you do it, so there's an incentive to do it as little as possible. Not sure how many cryptocurrencies that's true of.
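The fee bidding described above amounts to a priority auction for scarce block space. Here is a toy illustration (not real Bitcoin code; transaction IDs and fees are made up): pending transactions are sorted by fee, and only the top ones fit into the next block.

```python
# Hypothetical mempool: pending transactions waiting to be confirmed.
pending = [
    {"txid": "a", "fee": 0.0001},
    {"txid": "b", "fee": 0.0030},
    {"txid": "c", "fee": 0.0007},
    {"txid": "d", "fee": 0.0015},
]
BLOCK_CAPACITY = 2  # pretend a block only fits two transactions

# Miners prefer the highest fees, so low bidders wait (or bid more).
next_block = sorted(pending, key=lambda tx: tx["fee"], reverse=True)[:BLOCK_CAPACITY]
print([tx["txid"] for tx in next_block])  # prints: ['b', 'd']
```

Real block limits are measured in (weighted) bytes rather than transaction counts, but the economic effect is the one in the comment: every on-chain move costs real money, so users minimize transfers.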
The problem with FTX is not so much that fractional reserve banking is necessarily bad, it’s that they said they didn’t loan out funds, loaned them out anyway to bail out their crypto hedge fund, and then lost it all. Banks are pretty restricted in what they’re allowed to invest in. FTX mostly just committed big time fraud and blew up
This is more-or-less true of all cryptocurrencies, though Ethereum is experimenting with ways to post verifiable summaries of off-chain transactions that allow for better scaling (but I'm not sure how much better in practice)
At some point FTX transitioned into a Ponzi scheme but it still had lots of assets so as long as the amount of withdrawals is low the fraud can continue for a while.
Omnibudsman just posted a great writeup on cognitive gaps between socioeconomic and racial groups:
https://omnibudsman.substack.com/p/cognitive-disparities-develop-early
I was particularly surprised by the effects of maternal stress in-utero on IQ. Haven't dug into the source studies yet but curious what y'all think
I ran across an interesting take a while ago in a linguistics textbook about a factor that I think plausibly contributes to the worse cognitive skills of lower SES people. It was a study of how mothers talked to their small children (maybe age 3-4?) when asked to explain a task to them. The task was something like to go through a box of different colored blocks and put all the red ones in a bowl.
So the high SES mothers sounded something like this: "Becky, see this bowl?" [waits til Becky nods] "OK, so the blocks in here are a lot of different colors, right?"[waits til Becky nods]. "So I want you to look inside and find a red one" [ Becky rummages in box and pulls out an orange one ] "Actually, Becky, that block is a little bit red, but it's not a really red red. That's an orange block. Can you find me a red block that's really really red?" [Becky takes out a red block]. "That's right! Now that is a red block. Can you put it in this bowl over here?" [Becky does] "OK, you're doing great, so now your job is to go through the box and find ALL the red blocks and . . ."
The low SES mothers sounded something like this: "Becky, take the red blocks out of the box. . . .. Becky are you listening? Put the dolly down. Take the red blocks out of the box. . . . No that's orange. THIS is a red one. So take all the red ones out of the box . . .Becky! You're not paying attention. Take all the red ones out of the box and put them in this bowl."
So the high SES mothers were putting a lot more effort into engaging their child's attention & breaking the task into parts & making sure the kid understood each part before moving on to the next one. They also gave more praise and less criticism. I'm pretty sure that if you took pairs of kids whose IQ was the same, and taught half of them the red blocks task in high SES style and half in low SES style, the first group would end up being better at the task. Seems quite plausible to me that after a few years of living with high vs low SES style instruction the 2 groups would differ quite a lot in cognitive skills.
So why did the two moms differ so much in their style of instructing their kids? Author's theory was that high SES moms are preparing their kids for high SES lives, where they can expect to understand work tasks and have a positive feeling about their job. Low SES moms are preparing their kids for blue collar jobs, where they are expected to obey, and to get in trouble if they do not.
Or maybe "high SES" is just a way to say "smarter."
Yeah, I agree that the study I described does not offer any way to disentangle the communication style of higher SES people from their intelligence from the kinds of life experiences they've had. As a lover of language, I think I just find the info in the study intrinsically interesting, even though it doesn't Settle the Question of how and why social class perpetuates itself and why higher SES people test smarter.
I do, though, think that in the case of this task, getting your small child to separate out the red blocks and put them in the bowl, it's not very plausible that differences in intelligence account for the differences in communication style. The task is so absurdly easy for any adult that even the dimmest parent is not going to find understanding and describing it to be challenging, right? You can say that it takes intelligence to realize your child will perform better if you break the task into steps, make it fun, praise their successes, etc. And while I'm sure it's likely that the high SES parents had read more books than the low SES parents advising them to communicate with the child by keeping things positive, breaking tasks down into manageable parts, etc., I don't think it's reading the books (a proxy for intelligence) that explains the difference. My parents hadn't read any of those how-to-build-a-smart-and-happy-kid books, but they just naturally talked to me the way the high SES parents did in the study, and I also found it natural to talk to my daughter that way. So why do higher SES parents communicate in ways more likely to lead to higher and happier performance on the tasks parents set?
-Comes naturally because it’s likely to be the style they grew up with
-Less stressful life so more patience (note that high SES parent has to stop a couple times to make sure she has Becky’s attention, and once to help her understand red & orange similarities and differences)
-Less reason to fear their child will have a bad life. They don’t know anybody who’s ended up in jail or as a truck stop waitress. Less fear for their child’s future makes them less likely to become impatient and worried when kid doesn’t pay attention or makes a mistake.
-High SES expectation that in coming years child will often be in settings that treat them in ways similar to the way the high SES parent treats their child during the block task: respectfully, etc. Whereas low SES more likely to expect child to end up in setting that bosses child around, demands attention and obedience rather than setting up a situation that facilitates it, punishes inattention and disobedience harshly.
I don’t think the intelligence of the parent affects communication in this task directly — though of course it does indirectly (higher SES is correlated with both less stress and higher intelligence).
Good communication is in no small part the art of imagining accurately your listener's state of mind and adjusting to it. That is especially true for instruction. What could be more diagnostic of raw intelligence than a superior ability to imagine accurately what is going on in the mind of another?
I'm sure all the environmental factors you mention have an effect at the margin, but my WAG is that the primary driver is mother's level of attachment, and the secondary is mother's intelligence[1], in particular her ability to accurately model what the child is thinking.
And if there is a correlation with SES, I would start off assuming it runs the other way, meaning a mother with these intrinsic gifts of nature would prosper relative to mothers who did not.
--------------------
[1] I'm dubious about the whole multiple-intelligences thing, but I'm willing to concede this is at least a different....facet of intelligence, perhaps. I've known mothers who were very bright in well-recognized ways (e.g. could solve differential equations as well as anybody) but who had a hard time "connecting" in this way to their own child. But maybe there was a personality defect that got in the way at that particular task? The rate of personality defects alas seems to rise with intelligence once you get well past the mean.
Maybe it's the way teachers spoke to them when they were kids? Some schools/kindergartens may do the patient, painstaking 'break it down' approach, where they have small(er) class sizes and more staff to interact with children who were trained to take such an approach, where I'd expect that to be higher-income schools.
The 'sit up straight, pay attention, why can't you follow simple instructions?' interaction I would expect from lower-income schools.
I'm doubtful because of cultural differences. Sit up straight, pay attention, follow simple instructions until you demonstrate you're ready for more is a good description of education even of very young children in Asian cultures, and nobody would suspect Asians of turning out poorly educated children. So I don't think any learned style of interaction is the key. I think you can generally teach like a Montessori tutor or like a martinet, and probably get equally good results -- provided you are very aware of how well you are getting across, and you modify your speed, style, timing, et cetera to maximize the results.
That is, I think awareness of the student's state is the key to almost all good teaching, and I am rooting that awareness (for children) primarily in attachment and intelligence of the teacher (the mother in this case). I don't doubt that various techniques one can learn can nibble around the edges of the quality of the results, but I doubt they touch the core.
And parenthetically I would be cautious about overinterpreting correlations between style and outcome in teaching. I've seen very successful and very poor teaching outcomes in almost every style imaginable, so I'm dubious about the significance of the correlation. Many times I think the conclusions people draw from it are like concluding that umbrellas cause rain (on account of when it's raining you see a lot more umbrellas).
Actually, I have some data about the correlation between intelligence and empathy. So there's a test developed by autism researcher Simon Baron-Cohen (cousin of Sacha, by the way) called the Reading the Mind in the Eyes test. For some reason, you can take it on Amazon, & Amazon even scores it for you. It's kind of fun to take, and it's here: https://s3.amazonaws.com/he-assets-prod/interactives/233_reading_the_mind_through_eyes/Launch.html
Anyhow, this test, in its revised version, is well thought of, and people diagnosed with autism do in fact get much lower scores on it. So there were a couple studies that investigated whether scores on the Eyes test correlated with IQ. One of them looked at correlation in kids and teens with Asperger's, and found a correlation of about 0.35, significant but pretty modest. (To figure out how much of the variance in the Eyes test score that IQ predicts, you square the correlation coefficient, so only about 10% of variance).
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8039142/
The other looked at correlation both in normals and in Aspies and found no correlation:
https://depts.washington.edu/uwcssc/sites/default/files/hw00/d40/uwcssc/sites/default/files/Mind%20in%20the%20Eyes%20Scale_0.pdf
So this doesn't support half of your view, which is that mother's intelligence, as manifested in her ability to accurately model what her child is thinking, accounts for the difference in communication styles.
It doesn't contradict it either. You'll note I put ability to connect -- which is almost definitionally what Aspies lack -- at #1 and intelligence at #2. You're not going to learn anything about the importance of #2 if you compare groups that vary drastically in #1.
Edit: that's an interesting test, thanks. I ended up bang-on average, so I guess I'm neither a super empath nor Asperger's. One thing I found interesting is that I don't think the eyes themselves were that useful; I got more out of the head tilt, the position of lids and eyebrows, muscles around the eyes, et cetera.
Very interesting! But man that last paragraph was depressing.
Or perhaps low SES moms are teaching their children like that because they've never seen an alternative? How do you think they were treated as children? Poverty is about cultural capital as much as material capital.
Yeah, I am equally skeptical of the claim in the last paragraph. Seems like this is a learned behavior between generations. I think they have the causation backwards. Maybe because the high SES children "can expect to understand work tasks and have a positive feeling about their job" due to the way they were taught, they are more likely to remain high SES, and the opposite is true for low SES.
The author's theory doesn't necessarily require a conscious decision on the mothers' parts to prepare their children for those lives. So this still comports with a cultural capital explanation.
That theory predicts that wealthy families who hire nannies for their children will have worse outcomes. I doubt that is the case.
I don’t know what it predicts about that. Seems like prediction would depend on how much time the kids logged with the nanny, and what nanny’s background was. Many that I have met were 20-something women from Europe, and did not seem much at all like blue collar Americans. Would not surprise me if many had a college degree and families that were middle class or better.
Then it would predict that outcomes would vary based on the education of the nanny :) Something I also doubt.
Seems to me you're nit-picking. As I expect you know, there's a lot of variation in how much time the nanny spends with the kids. I'd say 30 hours per week is about average in the nannied homes I'm aware of. The kids also spend quite a lot of time with their parents, plus of course the parents are more important to them than other adults, and the impact of parental communication is probably greater than the impact of an equal number of hours communicating with the nanny. In any case TGGP, I did not suggest that this study settles once and for all the question of why high SES people test smarter. I said it was interesting. I still think it is.
I honestly did not really gain anything from that post. Of course, most of the time the truth is boring. But feels like this point is pretty well-accepted?
Perhaps you would gain something from my comment on there.
I’m hoping to travel to Switzerland over the summer, and was hoping to pick up at least enough of the local language to politely ask if they spoke English. My understanding is that the “main language” is Low German, which is apparently a little different from High German, which is commonly taught in America.
1) Is this correct?
2) Is there a good beginner text for Low German to learn from for a native English speaker?
3) Any good shows I could watch in the language (ideally subbed in English)
Thanks
edit just to note that yes, I expect many people will speak English, but it still seems polite to put in an effort to learn their language
Where will you be travelling exactly? Switzerland has three main languages, with different areas speaking French, Italian, or German.
I expect that you will find Swiss people *easier* to understand than Germans. Swiss people in the German part of Switzerland consider Swiss German their native language and High German their first foreign language. They learn High German at school, so when they speak High German (which they do for foreigners and for Germans), their pronunciation is much clearer and more by-the-book than that of actual Germans.
That is not my experience, as someone who has lived in the German speaking world for decades. Many (I would say most) Swiss Germans have strong accents when they speak Hochdeutsch. Not all of them are Hazel Brugger. Some Swiss Germans even find it easier to speak English than Hochdeutsch (or so they claim). I grant Swiss Germans will probably speak standard German more slowly than a native speaker would, that is helpful.
As others may have mentioned, Swiss dialects are a variety of “High German”; “Low German” includes Dutch and the (mostly moribund) dialects spoken in Northern Germany. That said, the various Swiss dialects are not easily intelligible even to native German speakers. Generally Swiss Germans expect non-Swiss Germans to address them in standard German. It is not a culture that is particularly welcoming to outsiders, and you may find that people do not appreciate you trying to break in; they might even think you are mocking them. To further complicate matters, the odds are that anyone you deal with in a customer-facing service position won’t be native Swiss anyway and won’t speak Zürcher well. Otoh, everyone you talk to in Zurich will speak English and won’t even be offended at being addressed in English.
Huh, I was under the impression that "High" German was associated with the mountains, closer to Switzerland, while "Low" German was closer to the "Low Countries" like Liechtenstein.
The confusion is because the opposite of "High German" is not "Low German", but it is "dialect". Low German is only one of many German dialects.
It's correct that Low German ("Niederdeutsch", "Plattdeutsch") is spoken in the flat coastal area. But it's actually one of the dialects which are rather close to High German (which originated from the Hannover region, pretty far in the north). Other dialects like Bavarian or the many Swiss German dialects are further away from High German.
You also seem to confuse Liechtenstein with something else (Luxembourg)? Liechtenstein is a tiny country between Switzerland and Austria, that's about as far up in the mountains as it can get.
The confusion is because High German means two different things. It originally meant highland German, but has come to also mean standard German.
Ok, as a native German I have never heard this meaning of "oberdeutsch" as "highland German" before, but wikipedia confirms that you are right. Thanks!
Doh! I was indeed thinking of Luxembourg.
I took German classes from two different professors (one from Innsbruck, one from Berlin). Both admitted they wouldn't be able to understand someone speaking Swiss German. But a Swiss German speaker would understand standard German. If I were you I'd just learn a bit of standard German since it will be vastly easier to find learning resources for that.
I knew some Swiss girls from I think Geneva, they spoke French.
As does much of Switzerland. And one little part Italian.
1.) No. German has regional dialects, of which Switzerland represents a few. But there are also dialects in Germany itself, in Austria, etc.; Low German is a collective name for these dialects/languages. Switzerland and Germany both have a standard version of German called High German. While the dialects can be very different, the High German standards are similar enough to be mutually intelligible. The phrase for "do you speak German" is the same in any case. Additionally, it varies by canton: in the French cantons or in Italian/Romansh country, German will not be used day to day.
2.) Low German is not a language. It's a collective name of multiple languages/dialects. You'd have to pick one. And, to be frank, unless you really invest it's probably not worth the effort.
3.) Any German show would be sufficient. At worst you'd pick up a German accent. The big differences are that the Swiss don't use some German special characters, like ß, and the actual accent.
When I visited Switzerland, I found that, depending on region, people didn't necessarily know the language of a region outside! We mostly operated with (very basic) German, but found a man randomly while asking some directions who was from Geneva and *only* knew French - no English OR German! (Which I know better than German, so that was lucky.)
An expensive politeness signal indeed. As a German native speaker I much prefer you talk to me in fluent English than in broken German. It is easier on both of us. Though you will find others with a different view for sure.
Also, the Swiss speak Schweizerdeutsch (Swiss German) or French depending on the region, afaik.
The "main" language is canton specific.
Where in Switzerland are you going to be?
probably Zürich
Zurich uses a dialect of German so your basic approach/choice seems sound. I don't know the exact difference or how to specifically learn Swiss Deutsch (which it is often called) but Googling for Swiss Deutsch or Swiss German will probably generate hits.
There are YouTube videos, but beyond maybe “grüezi” (hello), learning a dialect is something you either invest a huge amount of time in or just leave alone. How would someone in Queens feel if a random German walked in their bar and said “hey, how ya doin! I’m toisty heah, trow me a beah.” But with a strong German accent?
Just so I understand your lessons learned (so far):
You rated the trustworthiness of FTX on par with that of Walmart, Google and Facebook. So you’re going to setup a prediction market to help you recalibrate.
You don’t see EA leadership as culpable for donor capture by a fraudster, because some professional investors who allocated a fraction of their portfolios also invested in FTX. Nothing to learn about aligning with a single donor or ceding brand messaging to a single donor. Or about portfolio theory, aka diversification.
You believe the SBF / FTX circle weren’t technically polyamorous or outright smoking meth, so let’s not be unfair to them. Nothing to learn about unconventional behaviors as it pertains to credit worthiness or operating expertise or key man risk.
You think SBF only started being fraudulent at some certain point and this wasn’t a systemic or cultural failure within FTX or the industry. So nothing to learn about the broader ethical issues pervasive in crypto. Or revising our priors (or whatever it’s called) when crypto firms consistently defraud people with tacit justifications from the community like “it’s not your keys it’s not your crypto”.
The smartest kids, from the most prestigious institutions, with an elite pedigree, committed a most ordinary fraud by being conmen. Nothing to be learned about the kinds of fraud committed by intellectuals and the kinds of justifications they tend to use (saving the world, or whatever).
There are approximately 100 million people hearing about Effective Altruism for the first time, and it’s in the context of EA being the philosophical motivation for the largest fraud of 2022. And also, the full weirdness of EA is on display as fodder for the upcoming Netflix series and movie(s).
You don’t yet see the connection between giving attention seeking TikTokers the benefit of the doubt and giving overeducated, virtue signaling Slytherins the benefit of the doubt. And why we don’t give people the benefit of the doubt when there are consequences on the line.
You are still in the earliest stages of grief. And I think this will be an incredible moment for personal growth for EA thought leaders.
I think you're being unfair here. It's not like Scott is denying that fraud seems to have been committed. It's also not like any EAs are defending SBF.
Maybe the worst thing EAs could be accused of is taking crypto donations when they don't personally believe crypto is a valuable long-term investment. Cause it does essentially mean they're profiting from a scam.
You’re making a lot of great points here, and I also look forward to reading along as he thinks through them over time.
That’s part of why I think it’s essential that there eventually be some structure to that reflection. The Open Thread post as a way to make it seem less real (or perhaps to share initial thoughts with a smaller and more engaged circle?) is fine as an initial thought sorting mechanism, but over time I think you’ll have an especially insightful perspective not just on consequential events, but on major human themes. I hope you’re able to articulate and share those thoughts.
And taking some time to do so will almost certainly result in a more serious and engaged (albeit smaller) audience.
I don’t see most of your characterization of Scott’s post in his post. Ok, you have your point of view, but I’m not convinced by your effort to stick Scott with it.
It's kind of a lousy orgy if it's one straight girl with nine straight guys.
I don't think it was even as exciting as meth-fuelled orgies. It's more that at least one person was on the usual high-achieving student's study drugs of choice (Ritalin, Adderall, whatever), and that the little gang of pals and colleagues probably all were, or had been at some time. (Caroline Ellison tweeted about how dumb life was when non-medicated, which indicated she was on Adderall or the like; that was the amphetamine that got rounded off to meth.)
The orgies bit comes from the perception of poly, though I don't know whether that was so; they were generally romantically involved, currently or formerly dating, and poly-adjacent. But that they were having drug orgies is, alas for the scandal quotient, probably not true. Or at least not more so than in the usual Bay Area circles, which are sex-positive, poly-positive, sex work is real work, non-traditional families, gender and sexual orientation is on a spectrum, and the whole nine yards.
One of my biggest concerns is that EA is itself playing a status game. You just have to glance through the forums to see the number of people who are more concerned about "optics for the movement" than real problems.
There are many people clearly (including Will, I would argue) using EA to build reputation above and beyond the goal of charitable work. I really don't see how this is so different from other forms of charity. Wordsmithing the rules doesn't change the fundamental status game.
Anonymous philanthropy, doing good without branding oneself a do-gooder, solves this problem. But how do you build a brand/community/movement around doing good things but not taking credit for it?
Secret societies and mystery cults, maybe.
Outwards, everyone thinks you are making sacrifices to Yog-Sothoth in dark rituals. The true secret for the innermost circle only is ... that it is networking event to encourage rich people to buy bednets anonymously?
For starters if you ever put "build a brand/community/movement" before doing the work it's probably going to end up like this. It looks to me that plenty of people in EA are overtly using it to build their own status, and the movement is mostly just OK with that.
How exactly do you expect to "do the work" on a global scale without building a community or movement?
How do you do any work "at global scale"?
I think it's very dangerous to consider building a community or movement to be "the work".
It’s a dangerous, almost unavoidable cycle. A bigger community of people means more people to do good things, but you’re going to attract a lot of people who care much more about the community itself than the good things.
Reasonable speculation, I guess (from your POV), but no, I actually live in one of the “safely” bleeding red areas of Michigan, just a few miles from where some of those White Nationalist militia wannabes got raided for their part in the plot to kidnap and execute Gov. Whitmer. Away from Ann Arbor, the rich people seem to love De Vos’s idea to just stop funding public education. I am interested in how you would help me convince my neighbors with the disabled teenager, that the Republicans in Congress who talk about doing away with Medicaid, Medicare and Social Security don’t mean just for “lazy blacks and immigrant trash”, but for them, too.
>I am interested in how you would help me convince my neighbors with the disabled teenager, that the Republicans in Congress who talk about doing away with Medicaid, Medicare and Social Security don’t mean just for “lazy blacks and immigrant trash”, but for them, too.
Well 1) I don't think it is clear that is what they mean...that is what you assume they mean.
The activism that resulted in the federal education-related disability laws (IDEA, Section 504, and their precursors) created such a strong before-and-after that, if their child did not have to deal with public schools pre-IDEA, they will probably not know how weird it was. See if you can find some circa-1985 videos about special education. Probably almost all the things they access educationally will be absent in the videos, and that might be a wake-up call. If I can find any good sources I'll post.
I think some people hate Social Security in part because disabled kids can get benefits for not-always-obvious conditions, and there are clear pipelines of providers who are better at getting four-year-olds onto Social Security. Some of it seems false to me. Maybe this family is not in that category, though.

Maybe point out that some people blame them, and blame SSI in general, for quite a few social ills, even for the struggles of Social Security itself. (SSI is the form of disability income someone can receive if they have not worked enough quarters but are medically determined disabled; the dollar amount is lower than SSDI, but it's still significant.) They have probably relied on Medicaid, Medicare, SSI, and special education for many things. You can develop the discrepancy between their quality of life with and without those things, and then point out that youth were not originally eligible for benefits.

For a family in poverty, having a kid who is determined disabled may make the difference between paying the rent each month and being homeless, so disability becomes a necessity in some contexts. Does that seem backwards? Yes, but one can sympathize with the family and see that they need the resource and that the child benefits. Still, some people hate that kids can be eligible, or even that someone who has not worked enough quarters can get money; it seems to them like a dilution of the original purpose.
That will probably sound mean but it might make sense to them.
“You wanna know what effective altruism means? It means that you steal other people’s money while bragging about saving the world, while taking a big chunk for yourself. That’s what it means.”
At 49:18, from David Sacks on a reasonably popular podcast from some VC folks: https://youtu.be/Id7cNqwqt1I?t=2958
SBF being a large public face of EA means that it’s probably not a great time to embrace effective altruism as a brand, even if you strive for similar principles. Maybe even stronger: I’m not sure if the brand recognition of EA will be worth the negative connotations going forward. Probably worth reassessing in a month or so as the immediate storm passes.
I'm not a fan of EA, but i'm even less of a fan of your definition. At any level of income or wealth, an increase in how much you give each year has to be be viewed as an increase in altruism, whether or not it's Effective with a capital E. The idea that the increase in wealth that you experienced or achieved must have resulted from theft is ridiculous and in fact repugnant.
Take it up with David Sacks (it’s a quote from him, as I noted). I brought it up to demonstrate what I think will be a common response to recent events so I could make my subsequent point.
I agree with you that there are ways other than theft to obtain wealth. I’m also pretty sure that David Sacks (who is incredibly rich) doesn’t think that the only way to gain wealth is theft, but you can listen to the linked podcast to get a better idea of his statement in context if you’d like.
Strongly agree.
If EA ends up being the prototype of something better implemented in the future then that’s a great outcome.
Maybe development economics was the prototype for EA? Maybe colonizing and proselytizing was the prototype for development economics?
There's a really good chance Sacks is right on a whole bunch of levels! Hijacking the direct feedback loop between what you do and how you are rewarded is dangerous.
The "brand" kind of thinking is exactly the problem! It treats the problem as a status problem. If EA were so effective it would just point at the results and keep moving.
My rule of thumb regarding donations from companies: they are used to clean up the company's image. If you can't find the reason the image needs cleaning, that's because the reason is hidden. So accepting a donation precisely because no moral concerns are visible is itself a red flag.

If a billionaire really wants to be a big donor for good reasons, they will do it on their own: their goodness will be clearer to the public eye, and they won't have to deal with shareholders' pushback.
I don't know about the specific finances of FTX, but I do know about audits. Auditors work based on norms, and those norms get better at predicting new disasters by learning from previous disasters (such as Enron). The same is true of air travel, one of "the safest ways of travel".
In fact, at the same time that we're discussing this, a tragic air travel accident happened at a Dallas airshow. Our efforts might be more "effective" if we discussed airshow safety and its morality instead of SBF.
My guess is that the only way in which alarms would've rung for FTX investors was by realizing that there was a "back door" where somebody could just steal all the money. Would a financial auditor have looked into that, the programming aspect of the business? I doubt it, unless specific norms were in place to look for just that.
I think that there are two specific disasters here:
- SBF's fraud, and
- SBF harming the EA "brand"?
If what we want is to avoid future fraud in the Crypto community, then the goal of the Crypto community should be to replicate the air travel industry's safety model:
- Strong (and voluntary) inter-institutional cooperation, and
- A post-mortem of every single disaster in order to incorporate not regulation but "best practices".
However, if the goal is to avoid harming the EA "brand", then there's a profession for that. It's called "Public Relations".
PR is also the reason why big companies have rules that prevent them from (publicly) doing business with people suspected of illegal activities. ("Caesar's wife must not only be pure, but also be free of any suspicion of impurity.")
For example, EA institutions could from now on:
a. Copy GiveDirectly's approach and prevent any single donor from representing more than 50% of their income.
b. Perhaps increase or decrease that percentage, depending on the impact in SBF's supported charities.
c. Reject money that comes from Tax Havens.
c.1. FTX was a business based mainly in The Bahamas.
c.2. I don't know what the quality of the Bahamian standard for financial audits is. In fact, I don't even know if they demand financial audits at all... but I know that The Bahamas is sometimes classed as a Tax Haven, and it is more likely that we will find criminals and frauds with money in Tax Havens than outside of them.
c.3. Incentivize their own supported charities to reject dependence on a single donor, and to reject money that comes from Tax Havens.
Perhaps also...
... Campaign against Tax Havens?
- Tax Havens crowd out money that would otherwise go to tax-deductible charities, and therefore to EA Charities.
- There is an economic benefit to some of the citizens of the Tax Haven countries, but when weighted against the criminal conduct that they enable... are they truly more good than bad?
... Create a certification for NGOs to be considered "EA"?
- Most people know that some causes (Malaria treatments, Deworming...) are well-known EA causes.
- They are causes that attract millions of dollars in funding.
Since there is no certification for NGOs to use the name "EA", a fraudster-in-waiting can just:
1. Start a new NGO tomorrow.
2. "Brand" itself as an EA charity
3. See the donations begin to pour-in, and
4. Commit fraud in a new manner that avoids existing regulation
5. Profit
6. Give the news cycle an exciting new story, and the EA community another sleepless night.
In fact, fraud in NGOs happens all the time. One of the reasons the Against Malaria Foundation had trouble implementing its first GiveWell-backed program is that they were too stringent on the anti-corruption requirements for governments.
It's in the direct interest of the EA community to minimize the number of fraudulent NGOs, and especially the number of EA-branded fraudulent NGOs.
No, there were PLENTY of red flags, even from day 1, with FTX. But people get blinded by money.
> My guess is that the only way in which alarms would've ringed for FTX investors was by realizing that there was a "back door" where somebody could just steal all the money. Would a financial auditor have looked into that, the programming aspect of the business? I doubt it, unless specific norms were in place to look for just that.
AIUI there was a very large related-party loan between FTX and Alameda that really should have raised red flags for anyone who knew about it, and this kind of thing has been an issue in trad-fi before so it's not like they wouldn't know it's a potential issue.
There really needed to be some kind of internal governance or guard rails so that that sort of thing just wouldn't be possible. It's easier to avoid committing crimes in desperation if you've already pre-committed yourself to rules in some binding way.
Or maybe that just wasn't possible. Internal controls are always vulnerable to subversion at the top.
Extremely basic and obvious advice that probably no one needs to hear: Please don't have anything invested in crypto that you're not ready to kiss goodbye.
I'm pretty sceptical of the long term value of crypto, but even if you're a lot more bullish, you can't deny it's a highly volatile and risky business.
If you think it's +EV and you have the risk tolerance for it, more power to you. But be actually literally ready to walk away if it all explodes.
Seconded. At the end of the day, the question is what long-term needs cryptocurrencies, and the companies built around them, can fill, and how much money there is in it.
Right now, it feels to me that the boom is more from people figuring that there is a bigger fool somewhere that they can sell their tokens to eventually than being convinced that any token is a solid investment in itself.
Then again, I am hopelessly naive. I mean, if you had told me in 2010 "people will pay with cryptographic money in darknet marketplaces" I would have said "okay". Then you would have added "and also, cryptographically signed URLs to some web page showing an MS Paint image will sell for hundreds of dollars" and I would have been "no way".
So for all I know, the growth of "crypto" is sustainable, and will make us all rich. Personally, I would not bet on it.
Also, FTX is described as a "cryptocurrency exchange" on Wikipedia. Who are the victims then? People who were in the middle of exchanging currencies? People who kept their balances on their server instead of transferring everything to wallets under their own control? People who invested in FTT tokens which crashed? People who invested in the company?
I'm grateful for Scott's internet thing teaching me about deontology and consequentialism. Is there a similar fancy term for the ethical system known as the golden rule? “Do unto others as you would have them do unto you.”
I note that it has a failure mode - what if you enjoy people treating you badly? Then you need to add an epicycle - “Do unto others as most people would have them do unto most people.” Which would be much more difficult, because of typical mind fallacy.
There are two ways to avoid the failure mode -- the Negative Golden Rule, or "Do not do unto others what you would not want done unto you", and the Platinum Rule, or "Do unto others as they would have done unto them." These aren't without failure modes of their own, of course.
> Is there a similar fancy term for the ethical system known as the golden rule?
Yes: it's (roughly) Kant's "categorical imperative".
My current personal version of that principle is "honor the deals you would've made (if ideal versions of everyone had had unlimited time to negotiate)".
You'd mostly expect most people to make similar deals, like "don't take each others' stuff (with narrow but important exceptions)", but this allows that the hypothetical deal I made with Bob and the hypothetical deal you made with Bob might be different, and the deals might not be symmetrical, but they all have to be things Bob would've agreed to.
> note that it has a failure mode - what if you enjoy people treating you badly?
I have a hunch that a lot of people who treat other people badly like to be treated badly themselves.
The ethic of reciprocity.
Strictly speaking deontology and consequentialism are meta-level debates about why you should act certain ways. The ethic of reciprocity is an object level principle. So for example, you could have a deontological or consequentialist justification for the same rule about doing unto others as they do unto you.
If you were going to pick some modern philosophical work on deontological ethics for Scott to engage with, what would you recommend?
What precisely was the scam/fraud in FTX? I'm not clear on the details, and am not sure which parts are just bad financial decisions and poor circumstances, and which are clearly fraud.
I got no angle here, genuinely curious.
Forbes is *very* salty about the whole thing and may perhaps be being a little unfair, but it's a good look at the general impression this débacle has left on people:
https://www.forbes.com/sites/davidjeans/2022/11/12/the-devil-in-nerds-clothes-how-sam-bankman-frieds-cult-of-genius-fooled-everyone/?sh=5c3e14f41d26
"It was 2017 when Bankman-Fried first began dabbling in cryptocurrency trading. With an untamed mop of hair completing his disheveled gamer look, he’d just quit his job as a quant-trader at Jane Street, and saw an opportunity in his new hobby: the price of Bitcoin was valued differently in exchanges across the globe. If he could buy low then sell high in another region of the world, he realized that he could build a trading floor around Bitcoin arbitrage.
He launched Alameda Research with around 15 employees and traders, bringing in colleagues from Jane Street, like Caroline Ellison, and others like Nishad Singh, whom he had met through the Center for Effective Altruism, a group of thinkers and luminaries that vow to donate much of their wealth and with whom Bankman-Fried had become enmeshed with. “When we joined, his goal was to make a billion dollars,” one of the first Alameda employees told Forbes. “Alameda traders really were beholden to what SBF was doing: he was the head trader, they were the foot soldiers.”
From the start, “Sam wanted to take riskier decisions than the others wanted to take,” said another early Alameda employee. Specifically, he pushed back against efforts by some to slow down risky trading efforts, and overlooked the challenges of extracting capital from shady exchanges. “Sam ran the shop, Sam ran everything, we all trusted him, and believed him,” said an early employee of Alameda who worked with Sam and his close circle. “It was a dictatorship, in a good way, a benevolent dictatorship.”
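The cross-exchange arbitrage that excerpt describes can be sketched with purely hypothetical numbers (the function, prices, and fee rate below are illustrative, not Alameda's actual figures):

```python
# Cross-exchange arbitrage: buy where the price is low, sell where it
# is high; the spread minus fees is the profit, regardless of where
# the price moves next.

def arbitrage_profit(btc_amount, price_buy, price_sell, fee_rate=0.001):
    cost = btc_amount * price_buy * (1 + fee_rate)       # buy leg plus fee
    proceeds = btc_amount * price_sell * (1 - fee_rate)  # sell leg minus fee
    return proceeds - cost

# e.g. BTC at $10,000 on a US exchange, $11,000 on an Asian one
print(round(arbitrage_profit(1.0, 10_000, 11_000), 2))  # 979.0
```

The hard part in practice was not the arithmetic but moving money between venues fast enough, which is the "extracting capital from shady exchanges" problem the early employee mentions.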
Details are unclear, since this whole story is only about a week old, but what it looks like is FTX lent large amounts of customer assets to an associated-but-distinct trading entity, Alameda Research, against dubious collateral, probably to prop it up after it took losses (but that's a guess on my part). FTX offered leveraged trading, so just lending customer assets to other customers was permitted (at least to some extent, FTX may have gone beyond what was allowed), but lending against bad collateral means the loan might not be paid back in full, potentially bankrupting the exchange and losing some or all of the customer's assets.
Last week information about this leaked, triggering huge withdrawals and large drops in the value of Alameda's collateral, bankrupting both it and FTX.
Generally related-party loans like this draw suspicion because there's a temptation to pretend that the loan is good even if it isn't. Morally (and possibly also legally) this is fraudulent because FTX was claiming to take good care of its customer assets, but actually traded them for some magic beans which disappeared.
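A toy illustration of that failure mode (all numbers hypothetical; this is just the solvency arithmetic, not FTX's real balance sheet):

```python
# An exchange that lends customer deposits to an affiliate is solvent
# only while its remaining cash plus the collateral's market value
# still covers total customer claims.

def exchange_solvent(customer_deposits, loan_to_affiliate, collateral_value):
    cash_on_hand = customer_deposits - loan_to_affiliate
    return cash_on_hand + collateral_value >= customer_deposits

# $10B of deposits, $8B lent out against the affiliate's own token:
print(exchange_solvent(10e9, 8e9, 8e9))  # True  (token at face value)
print(exchange_solvent(10e9, 8e9, 1e9))  # False (token down ~90%)
```

The collateral being the affiliate's own token is what makes the second scenario likely: the same bad news that triggers withdrawals also crashes the collateral's value.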
SBF was using customer deposits (against FTX’s own customer agreement) to fund a loosely associated trading firm run by his girlfriend
So a regular bank would do a version of this as well, right? The bank wouldn't want assets as risky as crypto coins, and there are regulations as to how risky they can be, how much leverage they can have, etc... but I thought 'using customer deposits' is pretty standard? A firm can go insolvent without committing fraud, right? Is it the 'customer agreement' part that makes it fraudulent?
FTX isn't a bank. It's an exchange. With a bank, the deposits are (loosely) the property of the bank to do with what it wants. The depositors get paid for this through the interest on their accounts (this is a vast, vast, vast simplification).
With an exchange your deposits are just there to make it easier for the exchange to do the service you are paying it for which is to shuffle your money around between asset classes. The exchange takes a small fee for this.
For example, my savings are at Ally bank which pays me interest on the money deposited. The rate fluctuates based on what Ally can make on that money based on the terms of my agreement with them.
My brokerage is Fidelity, which is NOT a bank. Cash in my Fidelity account doesn't earn interest unless it's invested in some type of asset (Fidelity automatically puts it in a money market account, which is very safe but low-yielding). But, crucially, Fidelity doesn't take my deposits and make loans with them.
Furthermore, the FTX terms of service explicitly stated that FTX could not and would not use deposits to make loans.
The bank generally does not funnel customers' deposits into a company literally at the next desk using that money to make profits for the bank and to pay its debts, and pay the customers in toy currency it invented itself. "See, your ten thousand dollars are safe, here's my 100 Noddy Coins which represent your ten grand (and which you can't cash in or exchange anywhere else but here)".
Wasn't this pretty much exactly the origin of banking? You'd give your money to the House of Fugger or someone, and they'd give you a promissory note that could be redeemed for a similar amount of money at another branch of the house.
"So a regular bank would do a version of this as well, right? The bank wouldn't want assets as risky as crypto coins, and there are regulations as to how risky they can be, how much leverage they have, etc... but I thought 'using customer deposits' is pretty standard? A firm can go insolvent without commiting fraud, right? Is it the 'customer agreement' part that makes it fraudulent?"
The customer agreement part makes it fraudulent.
Also, until very recently banks could not do this, because Glass-Steagall created a separation between 'retail' banks (take deposits, make loans for cars/houses/etc.) and 'investment' banks (borrow money, usually not from retail customers, and place bets on futures, etc.).
The banks are also HIGHLY regulated in terms of how much they can play with their customers' money. FTX ... not so much (especially since it wasn't supposed to be playing with that money at all).
A reasonable 'real bank' analog is probably the MF Global financial scandal from 2011 when customer assets got 'diverted' for MF Global to make bets.
In terms of things that have been proven, I don't know that that part has actually been shown to be illegal/fraudulent.
The withdrawals (the claimed hacks) seem the closest to genuine fraud (I mean, hell, ARE genuine fraud/theft) but, again as far as I know, that remains an assertion not yet proved.
I'm not exactly trying to play legal games here, more trying to distinguish between "what do we *actually* know" and "it's cool to make fun of the rich guy when he fails".
Came here to say the same thing.
I have zero dog in this fight (don't know the people, don't care about crypto) but from my limited following of the issue, the whole thing was more "wildly optimistic claims that could, perhaps, have panned out, but didn't" than deliberate fraud. This is, of course, the origin story of basically every unicorn – to take a very different sort of example, it's hardly clear that SpaceX or Blue Origin will ever make money, so talking them up is a set of optimistic claims about how the world might evolve, which is, as far as I can tell, essentially the same thing as FTX and similar were/are doing.
....
Now was Elizabeth Holmes the same thing? That *seems* more cut-and-dried as deliberate, definite, faking results, rather than just optimistic stories about what the future of blood tests might look like, but I'm no expert.
Perhaps this was more like an Enron? Where things started off as a reasonable story about the future, then slowly drifted into not exactly fraud, but let's move the accounting around to keep the trend looking good this quarter, until eventually we do get a few cases of deliberate fraud. Lots of attempting to hide the reality, not much real fraud.
https://mattlakeman.org/2020/04/27/explaining-blaming-and-being-very-slightly-sympathetic-toward-enron/
I'll leave the judgement of morality and legality to others, I care about the understanding.
My GUESS is that FTX essentially ran down that same path...
Which is, of course, as a practical takeaway, why Jewish law prescribes a metaphorical fence, a chumra: don't try to avoid the implications and reasons for the law by twisting technicalities, because at some point you'll be so comfortable with moral (if not technical) fraud that you'll be willing to move on to actual, legally defined, fraud...
> but from my limited following of the issue, the whole thing was more "wildly optimistic claims that could, perhaps, have panned out, but didn't" than deliberate fraud.
nope, he literally built a personal backdoor that allowed him to take 10 billion dollars of customer money without triggering internal alarms and audit.
This is on top of the whole thing being held together by the shittiest accounting practices known to man and overleveraged to hell and back.
Given the well-deserved level of sympathy this stack and its readership has for SBF, I'll try to be as tactful as possible in bringing up another angle that ctrl-f does not yet turn up in the comments. This requires tact not because it impeaches his character (I would argue that it does the opposite) but because it uses tropes ordinarily associated with right-wing conspiracy theories. However, given the amount of updating going on this week it seems fair to at least acquaint ACX with accusations that he was laundering money via Ukrainian corruption (for instance FTX running a donations-for-ukraine site that took in $55 mil before being shut down and deleted -- still on archive.org tho) and was making promises of larger donations to democrat candidates (larger than his $30 mil previously, already the second-largest individual after everyone's favorite bugbear) in amounts not really possible for any sane expectation of how crypto makes money.
How would this be better than simple greed and ordinary fraud? Well, if he was running a racket on behalf of purely political money we have to ask whether or not we care more about campaign finance than we care about winning, and if he got pulled in over his head in old-fashioned dirty money and tried to move some around to keep things going and ended up losing actual people's cash in consequence, that at least demonstrates that he didn't start out looking to defraud them. The analogy is more that someone starting a small investment opportunity also starts working for the mob (after all, they're not violent anymore and are just doing extra-legal finance maneuvers) but ends up losing everything. The running for the Bahamas at the end with as much as he can carry? Eh, well, honestly, once the panic sets in none of us are our most altruistic.
I disagree that it is "fair" to "acquaint ACX" with accusations that a casual investigation will reveal are conspicuously unsupported by any evidence. That just boosts the noise while carrying no signal. It only took me ~5 minutes to verify that there is no evidence for the "laundering money via Ukrainian corruption" accusation; I don't know how many other people here wasted the same 5 minutes, but that's all on you. It was your job to do that investigation yourself, and either show the evidence behind the accusation or drop the matter entirely.
Oh, I did. Note your apparent indignation. Consider the mental trap we all have about in-group members. There is no proof of our guy doing anything illegal; our opponent is obviously colluding with the enemy. Our guy is a pragmatic statesman, the other guy is a dishonest politician. It's one of those irregular verbs, as they say.
My stated purpose of bringing this to your attention is the same as what Scott identified in himself when he wanted to speculate about how perhaps SBF wasn't aware of the fraud -- speculations that soften a bad story about someone we care about and respect.
For my part, I HOPE for his sake he WAS laundering money. It's not a bad thing, just illegal. I'll go to the mat for Robin Hood, too. I'm no expert on the subject, but I suspect that campaign finance laws are like the tax code -- they exist to be circumvented in various creative ways by teams of lawyers and accountants. If someone of SBF's stature was laundering money through Ukraine so as to more effectively support candidates he believed in, I say good on him. And if this went wrong in some way that caused him to end up needing to commit fraud, that to me sounds much more ethical than if he'd just started out planning on scamming the average investor.
Meanwhile feel free to believe that your worldview is composed purely of unimpeachable concrete things. Must be lonely up there.
*plonk*
Heh, yeah after reading the interview I'm including here I retract any positive-sounding things I said about his possible motives. I revise my opinion downward to "guy who is willing to commit fraud and is directly on record boosting something that walks, swims, and quacks like a money-laundering operation at the very least would have laundered money if he could." I'm not going to spend any further brainpower attempting to gild this turd, though.
https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy
There's little doubt but that SBF has committed some sort of fraud on a large scale, though we can argue about why.
The theory that you want to "acquaint us" with, requires that all three of A: SBF and B: the US Democratic Party (or maybe just Joe Biden), and C: the Ukrainian government, have committed fraud. But with no evidence for the second two parts other than "this theory isn't literally incoherent". Indeed, it seems to me like the whole *point* of this arbitrary, contrived, unsupported theory is that there are a lot of people who have a persistent desire to accuse the US Democratic Party and/or the Ukrainian government of fraud, have a hard time being taken seriously outside their own bubbles because of the lack of evidence, and so like to invent hypothetical scenarios linking their actual targets to unambiguously-guilty fraudsters in hopes that the rest of us will buy in to guilt by association.
You missed the part where he boosted a crypto-donations-for-ukraine site which reportedly collected $55 million before being shut down and deleted from the internet?
And again, I'm not even against this behavior. I don't regard finance law as really about ethics (I'm not doing the ends-justify-the-means bit). Honestly I'm just shocked at how offended you seem by the idea of democrats doing irregular but ethical things with money. I'm not claiming anyone was pocketing the money for corrupt purposes. Also shocking is that a famously-corrupt (by world standards) regime like Zelensky's is now so defended and lionized that it's bad to presume they're doing run-of-the-mill shenanigans with funding. I really don't get it.
I suppose if you're convinced I'm some kind of right-winger who wants to 'hurt' your team, the vitriol would be warranted. Anyway, it's a moot point -- I originally meant it as an elaboration on SBF's motives that I found exonerating, and I've given up trying to defend him and won't speculate further as to his involvement with Ukrainian fundraising.
If there were a single person or organization on the planet I engaged in as much motivated stopping to defend, though, I'd ask myself some hard questions.
Where would I go if I was interested in reading more about this topic? Especially concrete evidence or signs that actual money laundering was going on.
What are the base rates for fraud?
I would suggest it is higher for businesses involved in crypto, or that lobby heavily, or that suddenly make lots of money, and possibly higher in ones that claim a noble purpose.
But with scientific papers, charities, exams, memoirs, etc., whenever people investigate them, fraud turns out to be endemic, as Scott has shown lots of times.
Maybe I am being overly cynical, but you should probably assume a decent share of your friends, businesses you use, charities you donate to, scientific papers that you respect or that have entered the zeitgeist, and books you read are completely dishonest and fraudulent.
You are much more likely to hear about something that is a fraud than something that isn't. Your local grocery store carries tens of thousands of products that aren't frauds, for example. There are thousands of banks in the US that aren't frauds.
Crypto is a great source of frauds because it's a new industry with lots of unsophisticated people willing to throw money into it. In the 90s these same crypto fraudsters would have been doing pump-and-dump schemes, and in the 2000s, mortgage scams.
Fraud is likely to be lower for both long standing and low profit firms.
But I doubt the rate is that low; I suspect lots of goods in your local store are made by firms defrauding their taxman or creditors.
An interesting question is how much of crypto’s current state is tied to fraud. Is it possible to stamp out this fraud and still have a working crypto eco-system afterwards?
It will be interesting to see how things develop. I personally think the fraudulent behaviors are more central to crypto than they are to most other industries, so eventually crypto will essentially fizzle out and die. I'm not so attached to that, though, that I'll be mad if it doesn't.
If the fraud went away there would be way less attention to crypto and the asset prices would crumble, but I think that's more an indictment of how most people don't see crypto as useful than evidence it's only good for fraud.
I agree. I tried to phrase my comment in a way that allowed for that to be true.
Basically, I was saying that things like mortgages still exist even without all the fraud, but crypto may be different because it lacks inherent usefulness.
You mentioned that crypto is good for fraud because it’s a new industry. That’s true, but it’s also the case that unlike other new industries in the past, there may not be much left once the dust settles on the fraud.
I agree as well! Maybe a good analog would be things like psychics or fraudulent medical cures.
Software Engineer looking for work. ~15 years experience, worked at a FAANG company for a while, also have worked at smaller places.
Let me know if you are hiring for anything interesting. Prefer work that benefits the world or offers opportunities to master new skills.
A few months ago, FTX bought a stake in my former employer. At the time I was a little miffed that I wasn't able to participate in the transaction, it's a private company that doesn't pay dividends and I can't easily sell it outside of an arranged transaction like this. Now I'm glad I didn't end up with Ponzi scheme blood money.
It is humbling to know that my ex-coworkers, people who have been in the finance industry for years, people I respected and thought had well-honed bullshit detectors, still fell for FTX. Personally I've never touched cryptocurrency but figured, hey, if "my people" like this Bankman-Fried guy, he's got to be less scammy than the rest of them, right? Ha ha no.
We're all idiots now.
Now? I've only ever seen bozos on this bus
There have been lots of frauds over time, and there will be many more. It turns out that SBF was especially good at branding himself with EA and thus avoiding some scrutiny, but Elizabeth Holmes, Bernie Madoff, Enron, and many others have done similar stuff without any particular philosophical trappings. I suppose some introspection in EA circles is warranted, but most of the self flagellation seems somewhat beside the point. Now if there were a trend of EA types defrauding people, that would be a different story. For now, this seems like a one off. Bernie Madoff is Jewish, and his fraud (bigger than FTX, by the way) didn't lead to Jewish people reevaluating the morality of their religion or ethnicity, nor should it have.
Now crypto is a different story. There IS a trend of crypto people defrauding everyone they come in contact with, and at this point my take is that the whole space is quite rotten. If anyone needs to do some soul searching, it's crypto. Or maybe just a few prison sentences will do.
I completely agree. Kind of a reverse "no true scotsman". Instead of using the group to criticize an individual, EA critics are using the individual to criticize the group.
Like Madoff's, the FTX fraud is actually quite bland and ordinary as far as financial frauds go. The differences are the scale and the high profile of the prime suspect, just as with Madoff.
I liked Cowen's take on this from this morning:
"I would say if the FTX debacle first leads you to increase your condemnation of EA, utilitarianism, philosophy, crypto, and so on that is a kind of red flag for your thought processes. They probably could stand some improvement, even if your particular conclusion might be correct. As I’ve argued lately, it is easier to carry and amplify damning sympathies when you can channel your negative emotions through the symbolism of a particular individual."
https://marginalrevolution.com/marginalrevolution/2022/11/how-defective-a-thinker-are-you.html
When it comes to crypto, I often think of something Scott wrote a few years ago: "if you try to create a libertarian paradise, you will attract three deeply virtuous people with a strong commitment to the principle of universal freedom, plus millions of scoundrels. Declare that you’re going to stop holding witch hunts, and your coalition is certain to include more than its share of witches."
Crypto spaces advertise with "no regulations!" - now guess who ends up there?
I agree. It’s interesting to see how certain personalities are attracted to ventures that are very likely to be corrupt. Like, if you wanted to be corrupt without anyone knowing, you would maybe pick an industry not known for corruption and fly under the radar. But then you wouldn’t have other sociopath friends with whom to share your exploits.
Regarding point 5 (EA people have always condemned doing bad things for good reasons), how different is this from corporate mottos and the like? Every corporation in the world has some sort of statement somewhere saying 'we will always put the customer first' and every corporation in the world also transparently acts in a way at odds with this.
When you read a story about a corporation absolutely shafting a customer and the media print a response from the company spokesperson saying "[Corporation] is deeply committed to the principle of [not doing that]", most people don't treat that as much of a defence of the company - the bad actions speak a lot louder than the pretty words.
In this particular case I think EA is clearly more committed to the principles of Rule Utilitarianism than a random company is committed to 'Embedding Excellence in Every Widget Sold', but I do question whether they should get any credit for that unless they actually enforce Rule Utilitarianism as a condition of receiving EA money (or whatever - I mean some concrete step that elevates Rule Utilitarianism above other plausible Consequentialist frameworks)
I'm not sure that most companies do act at odds with providing value to the customer. Most of the goods and services I buy do what I want them to. My books, groceries, car, smart phone, clothes, and other possessions are generally great. Sure, I've had some bad customer service experiences and whatnot, which isn't fun, but it's typically not a huge problem.
In situations where I'm not the payer, like health insurance, things get a little more dicey. That can sometimes feel adversarial, and I feel like stories of insurance companies screwing people out of coverage are much more common. I don't mean to paint companies as being saintly.
So yeah, companies do bad things regularly, but in general I think a well-functioning market economy leads to most customers being satisfied most of the time.
"unless they actually enforce Rule Utilitarianism as a condition of receiving EA money"
I don't think anyone has ever given EA money to people doing extremely illegal things (well, if they did, they were smart enough not to tell me about it). Someone could put a page in every grant application where you have to evaluate their rule-utilitarianness, but that just seems like dumb virtue signaling by formalizing what anyone with common sense does already.
I completely agree with your response, but (given your reply) I don't think I quite managed to make the point I was hoping to in my original comment. Unless you especially want to contain FTX to this thread I'll think about it some more and bring it to the next OT
I expect the explanation for that is probably something like "most rationalists refuse to speak in absolutes, and 'vast majority' is actually the maximum quantifier intensity they're willing to use without doing a formal quantitative estimate."
Plus ça change, plus c'est la même chose: "the more things change, the more they stay the same." Finance + innovation + human brokenness = failure and catastrophe; it's happened for a long, long time. What's that other saying that innovators mention? "This time it's different."
Seems like many EAs have decided that they never bought into consequentialism/utilitarianism after all. And sure, if you look through the thousands of pages written, you’ll find some hedges against doing intentional harm for the greater good.
Unsurprising that nobody has the chutzpah to say the quiet part out loud: If SBF succeeds in destroying the credibility of crypto, ushering in regulations that prevent anonymous ownership of huge piles of capital on blockchains, that would have huge implications for AI safety, since it limits one vector through which AI may control significant resources.
Call it a “pivotal act” if you will.
I mentioned this at an EA party as a joke once, but it has become increasingly true: crypto's greatest contribution to AI safety is in reducing the number of intelligent people who would otherwise go into capabilities research.
I don't know if you're joking, but in case you're not:
1. FTX was pushing for more US crypto regulations and AFAICT doing a good job. If all they wanted was regulations, they could have kept doing that. Unclear whether the end result of this will be more regulated exchanges, or everyone who would have been on highly-regulated FTX going off somewhere else instead.
2. I'd be surprised if crypto came out of this so devastated that a superintelligence couldn't accumulate a pile of Bitcoins.
3. Probably not worth blowing up so many other charitable opportunities just to accomplish this one thing.
You could call me a concern troll. I don’t buy into AI doomerism; but if I did, I would definitely want to see the price of bitcoin fall, and laws passed that require KYC/AML for all nontrivial transactions.
On a serious note, I know that many EAs really believed in unbridled utilitarianism. SBF surely did. That so many EAs now be like “we never meant it that way” looks pretty questionable.
Edit: Furthermore, many rumors indicate malfeasance going back years!
Highly plausible to me to argue that technical avenues to AI safety have no logical basis (fundamentally impossible to predict what a supercomplex self-modifying system will do in the future), but through policy we may limit directly handing the keys to the economy to AI.
Re point 3: I agree that, granting AI doomerism, SBF and co. still *probably* didn't make the right calculations. But how does one fairly evaluate a decision with short-term costs and long-term benefits while still in the short term?
An AI that sits as an asset on a corporate balance sheet and an AI that manages its own balance sheet should arguably be seen as categorically different, requiring different regulatory frameworks.
Sorry for not making myself more clear.
While a business may use an AI — think of the AI as a capital asset owned by the business, like a factory — damages that AI causes become a liability of the business. By contrast, blockchains allow AIs to, anonymously on the internet, manage their own balance sheet: own other assets and take risks which may cause great harms. But because they would be anons, nobody will be liable.
While there has always existed a thriving world of secret finance (dark pools, offshore banking, shell companies, etc.), bringing all of that online, throwing unsupervised AI into the mix, and letting it run wild…would make for quite an interesting and chaotic economic environment. And probably massive systemic risks.
Now that crypto is in the news again (for something bad, like always), I have to say I'm surprised that crypto people still believe in this garbage. Like, how is it that people still say that this is the future of currency? How many more scams does it take to prove to you that crypto has no practical use (other than scamming lol) and is value-less? I mean just look at the whole community, it's literally all a Ponzi scheme (see https://ic.unicamp.br/~stolfi/bitcoin/2020-12-31-bitcoin-ponzi.html by Jorge Stolfi)
There's no specific aspect of crypto that made this possible; it's a fairly run-of-the-mill financial fraud that happens to be large in scale and involve high-profile people.
Now I am not a fan of crypto anyway, but this doesn't change my opinion much either way.
Well, was John Law right or not?
The Mississippi Scheme failed, but both important aspects of it (paper money and the value of the Mississippi basin) were correct; the immediate environment in which they were proposed just wasn't ready for them quite yet. (Or you could be even harsher and say the stupid officials who caused the bubble to end couldn't tell the difference between the past of money as gold coins and the future of money as a share of a productive enterprise.)
I agree that crypto has so far done a truly terrible job of explaining what it can do better than any pre-existing mechanism, whether that's contract law or standard dollars. But I am still willing (so far, though my willingness is dropping fast...) to be persuaded that *perhaps* such possibilities exist.
It doesn't help when the anti-crypto people come across as even more clueless than the crypto people – for example, if you can't see the poverty in the argument that "crypto is obviously a scam because it is zero sum" then, yeah, I'm not really interested in your opinions about finance.
"Now that crypto is in the news again (for something bad, like always), I have to say I'm surprised that crypto people still believe in this garbage. Like, how is it that people still say that this is the future of currency?"
I have a friend who does work in the crypto area. He owns a small company that provides programming services and is, essentially, an arms dealer to the blockchain folks.
HIS answer to this is that the use/need for cryptocurrency is lower in the US, Europe, Japan etc but has a much clearer need in places such as Russia, China, Vietnam.
In addition, folks who are seeing banks and/or VISA/Mastercard block industries they don't like (e.g. gun stores, porn sites) are starting to pay attention.
The scams are bad. But losing your money or getting cut off from the financial system by central authorities is a problem for lots of people -- just not most people in the US/Europe/Japan.
That makes sense, but still in that case crypto is just a band-aid solution for a bigger problem which is the authoritarian regimes.
This kind of feels like someone doubting the utility of medicines, being told that they're useful for treating diseases, and then scoffing that they're just a band-aid solution for the real problem, which is the existence of diseases. It's not *false*, but unless you have a good idea for getting rid of disease quick we should probably still have medicines.
Why would the economy of the Bahamas suffer because of this?
I guess FTX was a decent size company but I can't imagine it occupied more than an office building, maybe a few hundred employees. That's small compared to the size of the Bahamas, surely?
The economy of the Bahamas is heavily dependent on two things: tourism, which makes up about 50%, and financial services. This includes offshoring, being a tax haven, and encouraging crypto and e-commerce. There are always allegations that it is used for money laundering and tax dodging, and there have been clampdowns:
https://www.offshore-protection.com/bahamas-tax-havens
Tourism was very badly hit by the Covid pandemic and by Hurricane Dorian, and hasn't recovered to where it was yet. So a big scandal in the financial services sector is going to hurt.
There are fewer than 400,000 people in the Bahamas. The entire GDP of the islands is about $12 billion. FTX had a billion in revenue and something like $400 million in net income. And SBF and company were probably spending more than that, since he was stealing money. Even if you assume he was spending a tenth of net income on the entire executive team and principal owners' dividends and salaries, that's about .5% of the Bahamas' GDP. Equivalent to losing more than a hundred billion dollars in the US. Not enough to cause a full-blown recession, but it's still a serious blow.
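A quick sanity check of the back-of-envelope numbers above. All figures are the commenter's rough estimates; the commenter also assumes actual spending ran well above the $40M "tenth of net income" floor, which is how the share gets rounded up to roughly half a percent:

```python
# Back-of-envelope check using the rough figures from the comment above.
bahamas_gdp = 12e9                 # ~$12B total Bahamian GDP
ftx_net_income = 400e6             # ~$400M/yr
local_spend = ftx_net_income / 10  # the "tenth of net income" floor: $40M

share = local_spend / bahamas_gdp
print(f"share of Bahamian GDP: {share:.2%}")  # ~0.33% at the floor; higher if spending was larger

us_gdp = 25e12  # ~$25T US GDP, for scale
print(f"US-equivalent loss at that share: ${share * us_gdp / 1e9:.0f}B")
```

At the $40M floor the share is about a third of a percent; at the half-percent figure the comment settles on, the US-equivalent loss does clear a hundred billion dollars.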
On the bright side, SBF has managed to single handedly reduce wealth inequality in the Bahamas!
Indeed. The Gini coefficient should go down by a significant amount.
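For anyone curious how that would actually be measured: the Gini coefficient can be computed from a sorted list of incomes with a standard rank-weighted formula. A minimal sketch on made-up numbers (not Bahamian data):

```python
def gini(values):
    """Gini coefficient of non-negative incomes: 0 = perfect equality, ~1 = one person has everything.

    Uses the standard formula G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    where x_i are the incomes sorted ascending and i is the 1-based rank.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))       # 0.0 -- perfect equality
print(gini([1, 1, 1, 100]))     # ~0.72 -- one outlier dominates
print(gini([1, 1, 1]))          # 0.0 -- remove the outlier and inequality vanishes
```

The third line is the joke above in miniature: deleting one enormous fortune from the distribution mechanically drops the coefficient.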
If your market goes above 33%, do you then have to meditate until you achieve ego death, thus severing all ties with yourself?
https://manifold.markets/IsaacKing/will-scott-alexander-be-found-to-ha
No, that's my sign that you're all getting wise to me and I need to abscond with the money.
Bet a lot of money that you will commit fraud, commit fraud, use the benefits of the bet to compensate the people you defrauded, and the remaining money to fund EA campaigns.
That is a self-fulfilling prophecy if I've ever heard one.
If anyone wants to support research on oogenesis and meiosis, I'm looking for funding. I was awarded $300K from FTX regranting but it looks like they won't be able to actually provide it. My current funding will run out in August, and if I don't get more (at least 100K) I'll have to severely cut back and lay off my assistant.
My email is metacelsus at protonmail dot com
The FTX situation ties in well with the fraud triangle, almost too perfectly: rationalization, pressure, and opportunity. It was all there. Donald Cressey discussed the idea of a nonshareable financial problem as part of rationalizing fraud, and I see that as a situation where SBF and crew must have rationalized using customer funds as a way to avoid admitting their own failures. There are no great lessons from this, it has happened before and will happen again.
Many people here seem to be thinking of SBF as a brain in a vat making moral calculations and failing. I see a very young man surrounded by other very young men and women in close but complicated relationships, isolated in a penthouse in a country far away from their parents and other relatives, possibly using various drugs that are sometimes known to interfere with good decision-making, and swamped with so much money that everyone from Bill Clinton on down was kissing their asses.
I defy any of you to make excellent choices in those circumstances. This is not an excuse; it's an explanation. It's not an excuse because you can always remove yourself from those circumstances when you start to feel yourself sliding down the wrong chute. But it is a very solid explanation.
Yeah, how did he get into those circumstances? Just transported in, without willing it, shazam?
One or two steps at a time, just like most people get wherever they're going. Again, I'm not excusing his behavior.
Road to hell is paved with good intentions, eh?
This makes a lot of sense. It also reinforces how bizarre and foolish it was for VCs and other institutional investors to give him so much money after so little diligence.
Well let's bear in mind VCs are basically a giant game of craps. They *expect* to lose on 9/10 or even 99/100 bets, and win fabulously on the last one. So it isn't even in their MO to do so much "due diligence" that they talk themselves out of investing in slightly loopy ideas offered up by slightly loopy characters. I mean, if it *were* their MO they would be Bank of America, not a VC firm, and making solid money originating mortgages and business loans to General Mills.
Not to be annoying, but isn't there an argument that if you're gonna be kinda-returning your FTX-related money, it should go to the defrauded investors and not charities? Like, from the moral perspective of FTX, on the margin they should've defrauded less even at the cost of less donations. So maybe your FTX bucks should go to what they should've done.
I guess in practice softening the reputation/trust blow would require more money than anyone has, and helping some good charities is doable. But maybe that's the naive-consequentialism thing and you should do as implied by the weird deontology above.
That's the teeth of this dilemma. I don't care about the sports stadia and teams he poured sponsorship into, but there are good causes that were awarded grants out of what is now known to be stolen money. Bankman-Fried's actions affected a lot more people than just those who invested money with him hoping to make returns on their investment, and he has stolen from a lot more than just the money he defrauded.
Not sure, but I got the money in January 2021, when AFAIK they weren't defrauding investors. If I learn that they were, stronger argument for giving it to the defrauded investors it came from; otherwise, I feel okay about giving it to the people who need it most.
Also, AFAIK right now there is no way to give money to defrauded investors, it's not like they have a fund or anything.
The new Vox article makes it look like it was earlier. I guess they didn't have it in a separate account, so it's hard to know when it hit net negative lol
Seems reasonable.
As I understand it the bankruptcy process thing will be trying to give some of the money back (that's the thing with the clawbacks etc. right?), not sure what would happen if someone sent them money but maybe they would take it.
There's a post by a lawyer about this here: https://forum.effectivealtruism.org/posts/o8B9kCkwteSqZg9zc/thoughts-on-legal-concerns-surrounding-the-ftx-situation
I am sorely confused. I thought giving to charity meant something like bags of groceries, money for funeral expenses, paying that medical bill, helping scholarship funding. I know, very provincial thinking.
Now you're saying that giving to charity means funding "medical research, or lobbying, or AI research labs." In other words, my charitable giving should be to venture capitalists and lobbyists. Already a pretty wealthy group, one that I could be asking for a charitable donation someday.
Sorry, but I have no sympathy for the comeuppance of this child of uber privilege, nor for those who willingly followed him down the crypto path. And his "victims" - those that spent the money before they had it (in dollars) - well, perhaps my cold heart will someday warm and my sympathy will extend to them.
Bags of groceries and medical bills don't solve anything in the long run, you've just helped some random Joe, but did nothing to prevent their predicament from arising in the future.
This was an acceptable state of "charity" in the premodern era, where problems weren't really solvable and you could only mitigate the damage. We can do better.
The March of Dimes was a charitable organization dedicated to the fight against polio. After that was cured, they moved on to other medical causes. Lots of money is raised for research on breast cancer. My impression is that EA types regard breast cancer as over-funded relative to how many people it kills!
"My impression is that EA types regard breast cancer as over-funded relative to how many people it kills!"
And right now my impression of EA types is "Look who was your poster boy and where you followed him, so I don't think beans of your opinion about conventional charities. So long as the local group doing the Pink Ribbon thing is not secretly funnelling the fundraising to compounds in the Bahamas, I don't care".
Pretty sure that when we "Look who was your poster boy and where you followed him" it was towards funding exactly the sorts of things that EA was already funding. If you have an object-level criticism of how EA sets its priorities and why we should disregard their thoughts on breast cancer research funding, that's certainly appropriate (and plausibly warranted). But, you're going to need to demonstrate why this comment is anything more than uncharitable dunking.
As someone raised Catholic, I'm somewhat sensitive to the fact that people aligned with one's community don't always act in ways that are perfectly ethical. Given your moral commitments, I'm curious to what extent you believe that malfeasance by someone supporting a nominally charitable organization ought to impugn the stated ethics of that organization.
That might be the case . I don't know what the numbers are -- do you? Probably would save more lives to devote that money to providing people with mosquito nets or something.
" And his 'victims' - those that spent the money before they had it (in dollars) - well, perhaps my cold heart will someday warm and my sympathy will extend to them."
Imagine you are running some organization (maybe doing medical research) and you get a grant to pay for one researcher for five years. You'll get the money at the beginning of each year for the next half-decade.
If the grant provider goes bankrupt after the 1st year, you're not going to have money for the researcher and will have to let him/her/it go.
Are you proposing that folks wait until all five years of grants have been delivered before hiring the researcher? Or what?
>If the grant provider goes bankrupt after the 1st year, you're not going to have money for the researcher and will have to let him/her/it go.
This is exactly my situation, we need about $100K or else we'll have to lay off our research assistant when our current funding runs out in August.
No. Just perhaps some assurance that the money was available. Universities (to my understanding) do this regularly - negotiate the funding and the term, then dole out the salaries to the researchers.
Two weeks ago everyone thought that the money was available. This collapse was VERY fast.
I just want to vent here. I was aware that the week before last Alameda's balance sheet had leaked and had red flags. I knew a lot of people had withdrawn from FTX. I had a hardware wallet ready for long term storage. I had moved funds off Binance last year when a similar scare happened. And despite all that, I didn't withdraw from FTX partly because SBF was EA-aligned, he'd donated so much to EA causes, so there was no way the funds weren't there. I'm learning some tough lessons about my naivete.
I've lost all my crypto. A good chunk of my net worth. But worse than the monetary loss, I feel so betrayed. That loss of trust is devastating.
The smallest of things outrage me. SBF retweeted someone implying that there'll be an airdrop for those who didn't withdraw their funds. After withdrawals stopped, after Binance refused to buy FTX Intl, after everyone knew that client funds must be fucked, that he'd been lying so far, he categorically stated that FTX US was US regulated and totally solvent, and mere hours later FTX US declared bankruptcy. Lying to the bitter end.
All this about St. Petersburg is irrelevant. He could have doubled down forever if he wanted, blown up Alameda to smithereens, and everything would still be dandy if he hadn't touched customer deposits. That was the crime. He touched money that wasn't his to touch. That's it.
Thank you for writing that. I'm not a fan of EA and found myself emotionally unaffected by the FTX collapse. Your writing brought me back to earth and I can empathise with you. I wish you well.
I am very sorry to hear about your financial loss, and your well-justified feeling of betrayal.
I have no money in crypto or in the stock market, and honestly wasn't feeling much sympathy for those who lost in the FTX debacle.
Until I read your post.
The fact that you chose not to grab your assets and run because you believed in SBF, and were appreciative of his EA contributions, indicates you are a person with better values and ethics than most of the over-lauded crypto captains.
I hope you eventually grow your financial wealth back greater than it was before this shameful event, and that you keep and grow your sense of altruism as well. It's certainly worth more than imaginary money.
Thank you for opening my eyes to the harm that this has caused to undeserving people like yourself.
I’m sorry to hear about you losing money. Did you not realize crypto is at best gambling and at worst fraud? All of it. Money in a bank is safe. Money not in a bank is not safe.
Well, people may have been prepared to lose all their money due to crypto assets going to 0, but even if you cashed out to US dollars, you lost all of that too if it was on FTX.
And even if you cashed out to US dollars and sent it to a real bank months ago, it seems there is some chance it gets clawed back during the bankruptcy (90 day window apparently).
Yeah, sympathies. I didn't have money in FTX directly, but I lost a lot in second-order effects. Feels bad :(
One possible take is "reporters for the New Yorker are sometimes right". I'm talking about that article that was sympathetic to EA (and to MacAskill in particular) but showed SBF (to me) as someone the perniciousness of whose influence could be seen from miles off.
That writer in particular (Gideon Lewis-Kraus) seems to be quite good. For example, his article about SSC was generally considered to be balanced and fair by readers of SSC (in contrast to the NY Times article): https://www.newyorker.com/culture/annals-of-inquiry/slate-star-codex-and-silicon-valleys-war-against-the-media
Just read it. Nice article, balanced, thoughtful, hitting on a bunch of the red flags.
can you link to the article you're referring to?
https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism
Happy to take criticism of the scandal markets and improve the resolution criteria. Manifold questions can be edited as one goes. Happy to take suggestions of other high profile people.
Seems like the kind of thing that sounds good in theory, but terrible in practice.
Wealthy people can manipulate the markets if you let them bid. I suppose you could have a rule that people are not allowed to bid on themselves, but if you start a market on someone without their consent, you can hardly complain if they ignore your rules.
If the goal is to increase trust in EA, then having a page where you explicitly discuss how untrustworthy everyone is, at rates off by orders of magnitude from base rates of fraud, is a terrible idea.
I have many concerns, including:
* The psychological effect of being made a market's target (for no reason other than someone thought you should be tracked like this) and then seeing your apparent social credibility being updated in real-time.
* Related to the above, the potential for abuse a la Goodhart's Law. If you're rich you can buy up a lot of "YES they're a fraud", and really hurt someone's reputation among people who are watching that market. Or you can buy a lot of "NO this person is great" on yourself to lower people's suspicions.
* Encouraging people to outsource their feelings about their own personal connections to the market, including by Scott who apparently thinks a threshold of 33% (!) is enough to cut someone off or justify the refusal to do so.
These criticisms boil down to the fact that these markets will never be efficient, and this will create a lot of problems.
My main concern was the short date. In 2019, a market for "will SBF be involved in fraud by 2021" would have traded very low. Why not 2030?
Many of your markets seem to be at around 3%. If it's 3%/year, does that mean it's 30%/10 years, or is there a "fraudulent" personality trait that you either have or don't and they're saying there's a 3% chance you have it regardless of time scale? I don't know but I think it would be an interesting question.
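For what it's worth, a constant per-year rate doesn't just multiply by the number of years; a 3%/year chance compounds to roughly 26% over a decade, not 30%. A quick sketch of the arithmetic (illustrative only, assuming independent years, which the "personality trait" model above would reject):

```python
def cumulative_prob(annual_p: float, years: int) -> float:
    """Probability of at least one occurrence, given an independent
    annual_p chance each year (a strong assumption)."""
    return 1 - (1 - annual_p) ** years

# 3%/year over 10 years compounds to ~26%
print(round(cumulative_prob(0.03, 10), 4))  # → 0.2626
```

If instead fraudulence is a fixed trait you either have or don't, the probability barely moves with the time horizon, which is exactly the distinction the comment above is asking about.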
I think that market might have hovered a lot higher than we expected, which might have triggered it.
EA has missed an existential risk to itself that (in hindsight) was a fairly obvious problem. Should this affect our confidence that EA can make credible claims about the far future, such as the risk of AI catastrophe or the value of future human utility?
I'm not sure myself because I've never felt longtermist EA has paid enough attention to structural uncertainty analysis. But I'd be very interested in other perspectives
Oh, this is a good point. If they missed the beam in their own eyes, how can we trust them about the speck in others?
I agree. There are way too many variables for any group to accurately predict the utility of long term actions, even facially and unimpeachably good ones. The law of unintended consequences prevails and we know where the road of good intentions can lead. Think long term but act short term.
I don't really see why. Nobody in EA claimed to be good at evaluating the financial health of companies.
Suppose that a climate change mitigation charity took FTX money and got burned. Would you ask "How can these people claim to predict the future risk involved in global climate if they can't even predict the riskiness of FTX?" To me this would be a non-sequitur - they might be perfectly good climatologists and just not experts in finance and regulation. I think of EA the same way.
That's what I initially thought too, but I think the issue gets thornier the more you worry at it. For example:
1) EA community development is an explicit EA cause area. So I think it is fair to say that 'risks to EA itself' is something EA claims expertise over, in a way climatologists don't
2) Within 'core' longtermist priorities such as AI risk there are quite strong statements made about finance and regulation (e.g. predictions of how much AI will improve the productive capacity of the world). I don't think you can have it both ways; either EA is expert on finance in which case it should have identified FTX as a governance risk to itself (which should lower our confidence in their predictions) or it is not, in which case it should not be making statements about what finance will look like under an AI future (in which case we should lower our confidence in their predictions)
3) In general we actually might think less of climatologists if they all fell for a Nigerian email scam or something. However, climatologists also have an incredibly impressive predictive record, e.g. telling us when it will rain. Longtermist EA doesn't have a predictive record because they are predicting one-off events in the far future. Therefore we should Bayesian-update harder against EA than against climatologists, because we don't have as strong a prior on 'EA is stunningly predictively accurate regarding risk assessment'
This is a nice interview with Marc Cohodes about FTX from the beginning of September.
https://app.hedgeye.com/insights/122943-marc-cohodes-ftx-is-dirty-rotten-to-the-core-hedgeye-investing-s?with_category=17-insights
In re the ftx debacle, can one do immoral acts and yet still be ethical? Is there a rigid linkage between the two?
Wiser people than us have said things like "Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother's eye" and "Judge not lest ye be judged".
The standard of "if you have ever done an 'immoral thing' ever" is impossible to live up to, and the insistence on enforcing it (or more precisely using it as a weapon) is what drives "cancellation".
Are you sure today that something you consider normal will not be considered taboo, or at least something you try to hide, in 2050? Owning a pet? Eating meat? Having an abortion? Driving a gasoline car?
I'm not sure exactly what you are arguing here. Can you clarify? Cancel culture is not being approved of, nor is the idea that anyone has lived a perfect life, nor the proposition that only the perfect can presume to judge, an impossibly high standard. Future ethics or morals are not at issue, so I'm unsure what your argument is... I only ask: if immoral acts were committed, could it not be the case that the actors were still ethical beings? Sorry I did not make that more clear.
Cancel culture is FREQUENTLY about retrofitting the instantaneous ethics of today to the past.
I mean, FFS, there are multiple movies and TV episodes that are no longer broadcast because they involve blackface.
And I'm not talking Amos n Andy, I'm talking things like It's Always Sunny in Philadelphia with episodes from Season 9 (2013) and Season 14 (FFS, 2019!) literally cancelled from replay for MAKING FUN, IN A DELIBERATELY KNOWING WAY, of blackface.
We've moved from "blackface is bad when it makes fun of african-americans" to "blackface is unacceptable under *any conditions whatever*, no matter what the motivation". And damn right this is all retro-active.
We have become a society incapable of two-dimensional thought. If an artist is discovered to have held the "wrong views" that artist's entire corpus of work is deemed unacceptable. We are incapable of distinguishing between different dimensions of a person's life, eg between their personal beliefs (or at least our projections of what we think those beliefs were) and any other aspect of their life – their political career, artistic achievements, business creations, etc.
"can one do immoral acts and yet still be ethical?"
The current judgement of society is: "No. If you are discovered to have committed one of a set of particular offences [to be retrofitted as we wish], you are cancelled".
You can read the question in other ways; I've given you one answer to one way of reading it, a reading that's relevant in this context (how "society" feels about someone who committed an immoral act).
crypto isn't yet one of the deadly sins, it's not yet an -ism. But maybe it will be in 10 years?
I think you're conflating two different things in your remarks about blackface.
It's perfectly possible to think that (1) so-and-so, who did a TV show in blackface in 1965, did not have evil intentions and was no worse morally than everyone else around, but also that (2) it would be bad to re-run that show _now_ given present-day attitudes to blackface. Or that (1) so-and-so, who made fun of blackface in a particular way, did not have evil intentions etc., but also that (2) it would be bad to re-run his show because some viewers would be upset by it.
In other words, declining to re-run some past bit of TV is not necessarily a moral judgement about the people who made it. "Cancellation" in the sense of "not showing their stuff on TV any more" is not the same thing as moral condemnation of the person.
(Note: I don't mean that it never turns into that, of course. But you can't get from "X is no longer broadcast" to "the people who made X are being judged immoral".)
The technique of attacking one minor part of an argument, and hoping that that counts as dealing with the larger point, may work well against many internet opponents, but to me it mainly signals an author incapable of engaging with the primary argument.
Even the point you raise is so mired in self-selected concepts that it doesn't prove anything like what you think it proves. We don't broadcast things because "some viewers might be upset by them"?
And who decides which viewers get protected and which do not? I have a long list of things I'd prefer appeared less often, to never, in movie and TV content, but no-one's asking my opinion...
Likewise for the flip-side. What if I am a completionist, who just wants to watch every episode? What if I want to watch Louis, which was widely praised back in the day, just to see if it's as good as was claimed?
(And, stop damn pretending! Individuals who appeared in blackface have been roundly condemned. Do you seriously think Justin Trudeau is ACTUALLY a racist? So why pretend this?
Likewise various people in countries with completely different racial histories to the US who have been piled on by US obsessives. Zwarte Piet is on his way out in the Netherlands, for better or worse, but does anyone seriously NOT expect that come 2030 some part of politics in the Netherlands will involve dredging up videos of someone in blackface in 2018 and making claims based on that?)
I tried to lay out my full argument, not just blackface but the whole point that much of the current US deliberately and maliciously refuses to accept a distinction between an individual's morality (if that morality involves one of the 3rd-rail sins) and that individual's other accomplishments; AND that this morality is capricious and constantly changing; you chose to ignore it.
I think we're done.
Huh?
I wasn't claiming that anything I said "counts as dealing with the larger point". You said something about blackface, I thought it was wrong, I said so.
(I reject the implicit claim that claims other than one's "larger point" should be immune to criticism. They absolutely should not.)
And I didn't pretend that no one gets condemned for appearing in blackface. I didn't even pretend that no one gets condemned for appearing in blackface in the past when norms were different. What do you think "I don't mean that it never turns into that, of course" meant?
I am in agreement with you in some of your observations. But I'm pretty elderly and have never really grasped blockchains, Bitcoin, mining, and cryptocurrency. It all seems very complex and somehow fragile, floating on nothing but future expectations. But it made people nominally wealthy, so it improved some lives, and some apparently used it to try to improve the lot of others, so I can't cast the whole enterprise and its actors into some kind of outer darkness! It's said that FDR saved capitalism by regulating it, and perhaps the cryptocurrency industry needs regulation as well?
The "nobody could have seen the red flags" argument doesn't really stick for me, because as of now and the foreseeable future, everything involving crypto is one big red flag.
I don't have any financial background at all, I'm just a tech guy following the news. I grant that Bitcoin and a few other early blockchain technologies may have started out with only the best and most visionary of intentions by their creators - even if these ultra-libertarian ideals are themselves questionable, they do have merit and are worth considering. However, I find it indisputable that the honeymoon is over, and whatever castle the visionaries intended to build has long since sunk into a bog of outright crime and games of finding the bigger fool, with basically no redeeming qualities. Blockchain technology itself is either ecologically harmful (Proof of Work) or has a self-defeating design (Proof of Stake).
The heuristic would have to be: If you have been promised a grant by a crypto company but don't have the money yet in cold, hard cash (nowadays meaning a USD/EUR/etc. figure in a computer), you're at substantial risk of losing it and at least hurting your project. If you do have it in hand, you have almost certainly accepted the proceeds of crime, barely legal deception, or a greed game with substantially negative externalities, and all of it with a fairly short logical chain between yourself and those unacceptable events. In either case, according to your point 5, the result would have to be to refuse the money.
I sympathize: being offered an essentially blank check to work on something that is truly important must be irresistibly tempting. I'll never have to make such a decision myself most likely, and I'm glad for it. And yet, I don't have to be a financial genius to see that any company built on crypto money is built on sand, or is part of the scam itself. At this moment, when cryptocurrency companies have not yet had their ultra-libertarian abuses excised by regulation and turned into boring old financial institutions, it really does not matter whether you take money from FTX or any other player. You would be gambling your project or its soul on crypto just as much as anyone else.
There is nothing inherent in this fraud that requires Crypto. Madoff did something similar with cash at a regulated financial firm.
Yeah, I agree. Neither crypto nor the technology behind it is anything but snake oil.
This is just blatantly false. Blockchain technology is used in all kinds of non currency ways that are extremely useful. It may be true that a Blockchain based currency will never be successful, but the idea that the Blockchain concept itself is a useless scam (aka "snake oil") is completely ignorant of the real world facts.
> Blockchain technology is used in all kinds of non currency ways that are extremely useful.
Name one (that can't be done with SQL)
Please provide an example where Blockchain is used in a way that is critical to the project and produces value without a superior non Blockchain equivalent.
https://changelly.com/blog/walmart-blockchain/
-edit- this was with <5 minutes of googling. There are a lot more articles, many of which I'm sure are hypothetical fluff pieces, but I'm sure this is not the only case of an actual currently-in-use example.
Blockchain is not critical to this project. This is Walmart providing an API (written by IBM) and requiring that its suppliers use that API.
"Something like this wouldn’t have been possible without using a blockchain-based tracking system. No other technology is capable of guaranteeing the full immutability of all data while still maintaining full transparency and ease of traceability." <- While this claim is true to a point, given that the blockchain in question is run by IBM, they could develop a solution using Kafka (I'm biased by my love of Kafka Oriented Architecture, normal people would recommend a normal SQL database but that's less fun) and standard authentication in half the time (although IBM would probably have to charge less money for this service).
I'm skeptical that there are more than a handful of technologies on the planet that are _literally_ the only way to accomplish whatever it is they are being used for. Walmart clearly was convinced that this had enough advantages to be worth at least trying.
There is always more than one way to skin a cat. I'm comfortable that I have done more than enough to disprove the original claim that "the technology behind it are anything else but snake oil. "
I'm not into crypto. I don't (and have never) owned any cryptocurrency, I don't spend any time researching it, I basically only know what I hear about on reddit, and I've always been skeptical of its value proposition as a currency. But I'm sure that actual crypto enthusiasts could come up with dozens of non-currency applications. The fact is that a publicly reproducible record (for variously wide/narrow definitions of "public," as the Walmart example shows; you and I can't go verify their ledger) is a feature that, as far as I'm aware, nothing else has. That is not a feature that is useful in all situations, or probably even in very many situations, but it is novel and I don't buy for a second that it's a feature that isn't useful _anywhere_.
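For readers unfamiliar with the "immutability" property being debated above, the core mechanism is just a hash chain: each record commits to the hash of the previous one, so altering any past entry invalidates everything after it. A minimal, hypothetical sketch (this is only the data structure; it says nothing about decentralized consensus, which is where the real costs and disputes lie):

```python
import hashlib

def _block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash covers its data AND the previous block's hash
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev,
                  "hash": _block_hash(prev, data)})

def verify(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != _block_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, "shipment 1: mangoes, farm A")
append(chain, "shipment 2: lettuce, farm B")
print(verify(chain))                          # → True
chain[0]["data"] = "shipment 1: mangoes, farm C"  # tamper with history
print(verify(chain))                          # → False
```

Note that an ordinary SQL database with an append-only audit log and signed hashes gets you much the same tamper evidence when one party (like Walmart or IBM) controls the ledger, which is roughly the objection being raised in this thread.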
I think I still don't believe this - I would treat an offer of money from Coinbase about the same way I would treat it from Meta or Google. There really are gradations; the FTX thing gives me some evidence that they're not as strong as I thought, but not enough to totally overwhelm my prior understanding.
I think I'm going to have to go with The Onion on this one: https://www.theonion.com/man-who-lost-everything-in-crypto-just-wishes-several-t-1848764551
Or, as someone here put it, "there is nothing particularly valuable about crypto except as an unregulated sub-economy for people worried about badly-intentioned-authoritarian or well-intentioned-regulatory interference in the regular economy". And "people worried about well-intentioned regulatory interference", includes both criminals worried that the regulators will catch wind of their criming, and people doing legal but insanely risky things that sensible well-intentioned regulators would stop. If a nonstandard medium of exchange is particularly useful to those groups, then those groups are going to be vastly overrepresented among users of that medium.
So, someone promising to fund your charity with their crypto fortune, should be viewed in about the same way as someone promising to fund your charity with briefcases full of hundred-dollar bills. Maybe they're legit, but the odds just increased substantially that you are dealing with crooks, crazy risk-takers, or people who are OK with doing business with crooks and crazy people.
Aside from the ethical question of whether it's OK for a charity to take money that they suspect might have been but don't know for sure was illicitly obtained, from a practical perspective you'd want to A: not make any firm plans around spending that money until you have it in hand and B: not count the money as being "in hand" until you've got it in your own dollar-denominated bank account or at very least an offline hardware wallet.
The Onion is really going to town on this:
https://www.theonion.com/crypto-confidence-soars-after-ceo-defrauds-customers-ju-1849773647
"Once I heard that the CEO of the major crypto exchange company helped the FTC write the regulations for his own industry, I thought, ‘Well now, that’s a bank, straight up!’ "
Scott is the source of the quote about crypto's value in sub-economies: https://astralcodexten.substack.com/p/open-thread-250/comment/10448804
Also Scott: " if you try to create a libertarian paradise, you will attract three deeply virtuous people with a strong committment to the principle of universal freedom, plus millions of scoundrels. Declare that you’re going to stop holding witch hunts, and your coalition is certain to include more than its share of witches." https://slatestarcodex.com/2015/07/22/freedom-on-the-centralized-web/
The core of my argument is that it doesn't matter whether it's FTX, Coinbase, or Alphabet/Meta. If you know you're getting crypto money (or fiat directly converted from crypto) in any shape from anyone, the source of that money was most likely crime or unethical* gambling. Do you also not believe that?
* Unethical compared to e.g. the stock market because crypto has no meaningful economic function, or compared to casinos because of their negative externalities (mining).
I'm using Google and Meta as examples of non-crypto companies.
And yes, almost sure the majority of existing crypto doesn't come from crime or illegal gambling, unless you mean that by definition all crypto investment is gambling.
Also, disagree about crypto having no economic function. I have been trying to help members of the ACX community stuck in Russia, and I was able to send crypto to two people that helped them escape arrest or conscription. I'm really proud of this and it looks like a lot of crypto is being used for remittances or stuff. The media only talks about monkey gifs because that's the kind of people they are. I won't claim the good uses are an outright majority but I still think crypto is +EV for the world.
As a person currently stuck in Russia and evading conscription I totally endorse such use of cryptocurrency.
That said, I'm worried that this mechanism is just as successfully used to nullify the effectiveness of economic sanctions against Russian elites and financing Russian efforts in the war against Ukraine which seems to be a much bigger deal.
The existence of cryptocurrency, an alternative financial system that evades the coordination mechanisms of the conventional financial system, is net positive only if the coordination efforts of the conventional system are net negative. Do you think this is indeed the case?
There can be good uses to crypto, but if you're talking about taking donations from a crypto company, those donations are probably coming from the more profitable side of the industry, which disproportionately consists of scams and Ponzi schemes.
My guess would have been that the average FTX customer was "Superbowl viewer who bought some Bitcoin hoping it would go up", not "savvy scammer". If it was mostly savvy scammers, I wouldn't care about them losing all their money.
Hm, something seems lost in translation here. You're not getting the money from FTX customers, you're getting it from FTX. And the most profitable part of FTX is not the transaction fees, but the complicated options trading that borders on scams and Ponzi schemes at best and requires them to offshore.
By contrast, by far the most profitable part of Alphabet is serving ads on Google searches, which is providing actual value in connecting buyers and sellers.
Aren't you the author who first explained here, in a way that made sense, that the basic model is an attempt at a virtuous-because-obvious pyramid scheme? If so, you two might just be disagreeing about the moral tone of that portion.
One final question if you would indulge me:
Assume you work on an EA project that would benefit from additional funding. You are approached by a potential donor who wants to give back to their community by financing your project. The donor is known to be 100% reliable with his pledges. The donor is also a Mafia don, whose source of wealth you know to consist of both legal/ethical and illegal/unethical business, both making an unknown but significant fraction of the total.
Would you accept the donation? Why or why not? What other questions would you want answered to help you decide?
Edit: I guess what I'm asking is, how many degrees of separation are enough to satisfy your argument #5?
I can think of two reasons not to accept:
1. It's possible that him transferring the money to your charity serves or whitewashes his criminal acts - he can tell people "Look, I donated to charity, I'm a good guy, you should support the mafia".
2. It's possible that he only committed his crimes in order to give to charity, so that in some weird logical counterfactual sense you could prevent the crimes by not accepting the money.
I think with SBF, 2 is very likely true; for most Mafia dons, I would expect it to be false. If the mafia don was willing to donate anonymously, I would probably agree it was ethical, although still possibly refuse out of PR considerations depending on what those were. If he wanted me to name it after him or something, I would probably directionally think it was unethical, although in some situations I might still accept, like if he wasn't really *that* bad a guy, the whitewashing ability was pretty low, the PR risk was low, and the need was great.
I admit this is less principled than always refusing would be, but it's my honest answer.
> I think with SBF, 2 is very likely true
That seems overconfident / premature at this point. Or at least there are many alternative explanations; e.g. Yudkowsky's speculation on the EA forum:
> I'm actually a bit skeptical that this *will* have been done in the name of Good, in the end. It didn't actually work out for the Good, and I really think a lot of us would have called that in advance.
> My current guess is more that it'll turn out Alameda made a Big Mistake in mid-2022. And instead of letting Alameda go under and keeping the valuable FTX exchange as a source of philanthropic money, there was an impulse to pride, to not lose the appearance of invincibility, to not be so embarrassed and humbled, to not swallow the loss, to proceed within the less painful illusion of normality and hope the reckoning could be put off forever. It's a thing that happens a whole lot in finance! And not with utilitarians either! So we don't need to attribute causality here to a cold utilitarian calculation that it was better to gamble literally everything, because to keep growing Alameda+FTX was so much more valuable to Earth than keeping FTX at a slower growth rate. It seems to me to appear on the surface of things, if the best-current-guess stories I'm hearing are true, that FTX blew up in a way classically associated with financial orgs being emotional and selfish, *not* with doing first-order naive utilitarian calculations.
> And if that's what was going on there, even if somebody at some point claimed that the coverup was being done in the name of Good, I don't really think it's all that much of Good's fault there - Good would really not have told you that would turn out well for Good in even *naive first-order thinking*, if Pride was not making the real decision there.
There’s definitely a different perspective you get coming at cryptocurrency and related financial instruments if your primary exposure to it as a business is ransomware. Even some people who made large amounts of money with early BTC purchases developed a lot of skepticism when it evolved from a drug-purchasing currency to a ransomware currency.
I know it’s an unfair assessment (criminals use cash too! And there’s no public ledger for cash!), but it was still jarring to see.
It's probably sane to accept money someone in crypto wants to give you (the worst case is that it fails to materialize), but to otherwise not even touch the stuff, and also not to depend on any money you have not yet actually received *and* converted into the kind of money that you can in fact count on working.
Essentially, it's not real until converted.
Seems like a good idea but might not be enough. Apparently it still might not be "real" if it eventually it gets reclaimed as part of the bankruptcy process. In which case it ends up sort of like a loan?
"make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively be emotionally dead to anybody who goes above a certain threshold."
Seems exploitable
"How much money am I willing spend to make Scott never speak to his uncle again?"
Hi,
I am in a red state/rural area in the US that has gotten flooded with more people. It's tempting to say this has happened post-pandemic, but census data from early 2020 shows this isn't true.
I believe the area has been steadily growing since the early 2000's, with a slight slowdown during the worst of the 2008-2012 recession years, but I think it intensified shortly before the pandemic, and absolutely exploded during the pandemic.
The result is that my little hometown, which has had problems with infrastructure for as long as I remember, is really struggling to keep up with the growth. We have a set of problems that have been building for a long time: not enough police, not enough DMV employees, roads built for much less traffic, and the big one: not enough housing.
The area isn't actually super conservative. There's a handful of locals that like to remind others that this has traditionally been an area that votes conservative federally, but elects politicians from both sides to state and local offices. The area has also been pro-union, etc. in the past. On the other side of things, there's a loud element on social media saying things like "Go home, Californians", (This sentiment has been around for a while, but seems louder to me.) Online, in places such as local facebook groups, there's anger surrounding wealthier people moving into the area and pricing out people who have lived here forever. This results in people being upset whenever higher-priced, non-affordable housing is built. There is a slightly different group of people upset when cheaper apartments are built in neighborhoods where it might cause them inconvenience. Sometimes I just want to comment in caps lock "DO YOU WANT HOUSING OR NOT?" but I refrain.
This political debate is happening in the face of the weird economic stuff happening in the US recently: companies having difficulty finding service-sector employees and people having difficulty finding rentals and buying houses are the big ones. Homelessness is rising, too.
The difficulty finding service-sector employees is explainable enough, since many of the people that moved here are upper middle class people who work remotely, and some service-sector employees are getting driven out by lack of rentals. The weird part is that property taxes are going up due to rising home values, and there are lots of new homeowners paying property taxes, and yet somehow, this money isn't enough to support the local government. A few of the schools are filled to the brim and don't have enough physical space to put students, there are roads that are way overdue for expansion and maintenance, and yet somehow, in spite of all the new money sources, it's just not enough. (I know how this sounds. I have reasonably high trust that my local and county governments are not corrupt, but I haven't dug into this.)
People have been "Voting with their feet", and the lenient COVID policies, general good management, and high livability of my area has gotten voted for, loud and clear. And yet the good management and high livability are shrinking because of the influx of people, and there doesn't seem to be any easy solution. Some of the problems don't even seem to have clear causes, such as the "not enough property tax to support the county" issue.
I am wondering if any ACX readers have any thoughts on this, or have seen their hometown go through something similar. I know mine is far from the only town experiencing a culture shift, housing issues, and infrastructure problems post pandemic, but I am wondering: Is there a good way to handle it? I lean conservative, as does my town. Everybody is in horror of the area "Turning into California" (i.e. high taxes, high prices, high regulation, high homelessness. I realize this is a generalization and more reflective of one or two areas in CA rather than the whole state.) Are there good solutions to this that don't involve raising taxes, prices spiking even higher, or fun new laws and regulation, such as restrictions on vacation rentals?
A previous place i lived was a very desirable place to live. But was very against building dense housing. Therefore the housing prices increased (and nearly doubled in some parts during 2020/2021) over the past 10 years.
A nearby town (in a much more conservative area) had about 800 residents 20 years ago. It's now 10,000, because this town was willing to build housing and people were looking for a place to live. Luckily, I think they have been able to keep up with the infrastructure.
So my takeaway for you is that your town's problems are due to some other place not building housing, and it's unlikely that your town can fix its own problems. Unless something drastically changed in your town (like a major new employer opening or a great public amenity), people probably didn't view it as their first choice of place to live. But they couldn't afford their first choice, so they went to your city where they could.
Also, does your town control its own budget, or is it part of a county? Different states handle this differently, but it could exacerbate the problem depending on the competency of those in charge, the budget size, access to debt, etc. Infrastructure changes take a long time. It's much faster to build 1,000 new houses than to build new schools, fire stations, roads, etc.
On top of this, a city is very unlikely to build excess capacity for things like schools even when they should. In my hometown, they tore down my elementary school to build a new one. The new one opened and they already needed portable classrooms to add capacity. They have since expanded the school twice. And this is one of the richest areas in the country with very high taxes. Money was definitely not the issue.
Gee, it's almost like Paul R. Ehrlich might have been on to something...
I have nothing more to say. There is no subject on god's green earth that makes people stupider than this one, but, yeah, behavior has consequences. WHERE did people THINK their kids were going to live? On Mars?
Considering humanity still exists, Paul Ehrlich was most assuredly NOT on to something. There aren't many "experts" who cling to credibility after being more wrong than "the internet is no more transformative than the fax machine," but he's one of them.
Any correct prediction Ehrlich made is probably coincidence, given that his predictions about the consequences of population growth ranged from "directionally correct" to "diametrically opposed to reality".
Paul R. Ehrlich really wasn't onto much, as it turns out. The world already produces enough food for there to be zero famine; too much of it just doesn't get where it's needed. Repeated studies show that 40% of food in the U.S. ends up in the trash at some point in the farm-to-fork chain. Close to half the countries in the world have birth rates that have fallen below the level of replacement. China's population will peak soon and decline.
Yes, there are a handful of countries that still have rampant, unsustainable population growth, but the idea that "there's just way too many humans for the planet to ever support" has been shown repeatedly to be untrue. People can't all live in the same few most desirable places, but Earth is still vast and humans get increasingly efficient.
The number one agricultural exporter in the EU is big Spain. The number two? The tiny Netherlands, because they grow so efficiently in greenhouses, vertical farming, etc.
Humans on Earth face some pressing problems -- but long term, overpopulation is not one of them. Birth rates fall naturally as women achieve higher rates of education, and except for a few outliers like Afghanistan, women and girls are getting better educated the world over.
ALWAYS we get this. That food isn't running out. Like that's the important thing.
What's running out is "footprint required to sustain a certain level of lifestyle".
Greenhouse gases -- much less of a problem with fewer people.
Species destruction -- inevitable if we insist on using all land everywhere for human purposes.
Look at the post I was replying to – a claim that we need to have denser development because it's "more efficient" even though most people don't want it. We wouldn't need that "efficiency" if we stopped creating new people each of whom moves on (in time) to need their own housing etc.
But I'm not interested in arguing about this. I have spent my entire life talking to people who insist that saving wildlife or dealing with greenhouse gases are vitally important problems but who ALSO insist that they have nothing to do with population.
You lot can have your stupidity and the inevitable consequences; I have better things to do than talk to a brick wall.
Lots of people DO want to live in densely populated areas. Rents are high in NYC, because rules prevent it from getting EVEN MORE dense.
What rules? I'm not aware of any zoning rules in NYC that say you can't build residential skyscrapers anymore. And apparently, neither are the builders:
https://www.citysignal.com/5-new-skyscrapers-that-are-radically-changing-nycs-skyline/
Large swathes of New York are covered by "historic preservation". And, not specific to NYC, but the feds have encouraged neighborhood-level "participation" in development decisions that foster NIMBYism in cities all over the country.
https://twitter.com/robinhanson/status/1490723865978294273
Yes, a stable genius like yourself must have better things to do than call anyone who disagrees with you stupid.
Why bother with an open comment thread when contrary comments leave you sputtering with impotent indignation?
Sounds like you've spent your life working hard on solutions to help others. Or maybe just being disagreeable and argumentative. Reminds me of a grumpy old dude I know who chalks up his lack of a partner, children, or a big loving circle of family and friends to "not wanting to contribute to overpopulation." Instead of "People don't seem to enjoy my sour company much."
> the entire population of the Earth could fit comfortably into a mid-size American state
Tokyo population density= 6158/km2
8b at that density gives you 1.3 million km2
That's twice the size of Texas, or approximately the size of Germany, France, BeNeLux, the Alpine countries and Italy combined, which is remarkable really.
So quite a bit larger than the average US state! Sure you can go denser than Tokyo (at the density of Hong Kong island itself, they'd nearly all fit in California), but anything denser than a famously sprawling yet dense city seems untenable.
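The arithmetic above is easy to sanity-check. A minimal sketch, where the density and area figures are approximate public values (assumptions, not from the thread beyond the Tokyo number already quoted):

```python
# Rough sanity check of the "fit humanity at Tokyo density" arithmetic.
# All density/area figures are approximate public values (assumptions).
WORLD_POP = 8_000_000_000

TOKYO_DENSITY = 6158         # people per km^2 (figure quoted above)
HK_ISLAND_DENSITY = 15_700   # people per km^2, Hong Kong Island (approx.)

TEXAS_KM2 = 695_662
CALIFORNIA_KM2 = 423_970

area_at_tokyo = WORLD_POP / TOKYO_DENSITY      # ~1.3 million km^2
area_at_hk = WORLD_POP / HK_ISLAND_DENSITY     # ~0.5 million km^2

print(f"At Tokyo density: {area_at_tokyo:,.0f} km^2 "
      f"({area_at_tokyo / TEXAS_KM2:.1f}x Texas)")
print(f"At HK Island density: {area_at_hk:,.0f} km^2 "
      f"({area_at_hk / CALIFORNIA_KM2:.1f}x California)")
```

At Tokyo density that comes out to roughly 1.9x Texas, consistent with the "twice the size of Texas" estimate; at Hong Kong Island density it is in the neighborhood of California's area.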
I honestly believe that somehow we forgot how to build infrastructure in a reasonably inexpensive way. It’s worse in some areas but overall large projects have run into some disease that makes them cost prohibitive. Some combination of too many veto steps most involving courts, environmental rules, nimbyism, etc etc ad infinitum.
It shouldn't be that way in my conservative, fairly low regulation area, but I think it is, to some extent. We do have some pretty significant nimbyism, as well as mild-moderate environmental rules. I don't know if it has gotten significantly more expensive in recent years, or if something about cashflow has just gotten harder.
Hypothetically, it's not that we exactly forgot how to build affordable infrastructure; we forgot how to keep people from taking a good bit of what could be taken.
Spread-out infrastructure is very expensive in the long run. Strong Towns has done some work on measuring this, and Not Just Bikes has some videos on their work (e.g. https://www.youtube.com/watch?v=XfQUOHlAocY, https://www.youtube.com/watch?v=7Nw6qyyrTeI, https://www.youtube.com/watch?v=VVUeqxXwCA0). The only solution, unless the town is very wealthy, is to allow dense development everywhere. Get rid of single-family only zoning and parking minimums, put in some bike lanes, restrict cars downtown, maybe build a tram if you have enough people. And the only way to keep the price of housing down is to build more housing.
I've lived in a relatively dense, well-off suburb for a decade. We have plenty of townhomes and apartments, and we have bike lanes which nobody uses, and if there's a parking minimum, it's already ridiculously low for the number of cars.
The biggest single problem with increasing density and getting rid of cars for most Americans that don't already live in an inner city is probably the role of the supermarket in American life. We've come to expect the selection of food that comes from a large supermarket, and the modern supermarket is impractical without a car.
I live close enough to walk to the supermarket, and I do walk there on occasion, but only when I need one or two items that I'm missing or otherwise have to have right now. Otherwise, I make a trip once a week via car, and regularly have trouble getting my week's worth of groceries in from the car in a single trip. I don't drink soda or beer, but having to carry a six-pack on top of what I already have would make it impossible. And I'm single, and frequently eat at the office, so my grocery bill is fairly short. The issue of weather has already been brought up, but there's an additional layer of challenges in dealing with a family besides the massively larger amount of groceries required, such as laws restricting your ability to leave kids alone to run to the store.
(If the biggest driver of suburban living in American life is not the role of the supermarket, it's almost certainly making educational quality more of a determinant in where to live than where you work.)
I don't think anyone should feel guilty about making a weekly grocery run, if that's all you do. People get excessively binary about this. A community with one car per house that mostly stays parked would be doing much better than average, for the US. That's like a retirement community where people hardly drive at all.
It's not very practical largely due to commuting and other trips that people want to take.
[Caution, the following is somewhat fresh off the train of thought, and thus half-finished]
I think it ultimately boils down to people and the goods they consume need to travel more.
I have relatives in small east coast cities and large towns that predate the car and were built for mining or industry. The city/town had a factory or mine where half the people worked. The rest worked in town doing the other necessary jobs. It was easy to determine where to live: within walking distance of the factory/mine, and probably closer to the church of your particular denomination. This scales somewhat as trains / cars are introduced. If the factory was big enough that it required more people than the town could manage, a rail / road network appeared with the factory at its hub.
The push away from industry after WW2 changed things, and the many changes to labor since have further pushed the transit systems. Instead of having a lot of big factories, you have a lot more smaller industries and commercial offices. These are spread out a lot more, and the hub and spoke system doesn't work as well. The following smaller component changes are the ones I've thought of:
1. As population density of the city increases, the land at the center (the hub of the transit network) goes up in value faster than the land on the periphery, making it cheaper to start a new business at the periphery.
2. Logistics has changed. Intermodal transport means that more cargo is carried on trucks, at least for the last mile, and factories don't need to be built on a rail line.
3. We have a lot more women working in the workforce and two salary households. It's easy to live near where you work with a one salary household. If both adults are working, the odds are high that one of them will have a commute.
4. People switch jobs more frequently. I live in the DC suburbs and there are tech jobs scattered all over the place. I'm fortunate to live close to where I work and the odds are if I switch jobs it will mean a significant increase in commuting miles/time.
5. DC has a comparatively decent hub and spoke mass transit system that's great if you want to work at/near the hub (the Pentagon, for example), but lousy at going spoke to spoke, and even extending the spokes takes an inordinate amount of time/money/effort because the area is already developed by people/jobs that didn't want to pay to be downtown.
6. Once you've assumed that someone is going to have to commute to work no matter where you live (since you have two workers), you may as well choose the best possible location, which if you have kids is probably determined by the quality of the school system. While school quality is probably tied to housing quality, it almost certainly doesn't tie in to the job quality; expensive areas with good schools still need low-wage baristas and supermarket employees.
7. Immigration pushes communities to live close together even when they don't necessarily work close together. This was the case before World War 2, but immigrants are needed to do the messier low wage jobs, in part because they aren't used to the higher standard of living.
This all sounds plausible, but I can't square it with the actual history of urban and suburban redesign, especially in North America. Starting shortly after WW2, new car-dependent developments were built outside cities, in an entirely new pattern of development. At the same time, the interiors of cities were forcibly demolished to make way for highways and parking lots. This was mostly due to the increasing popularity of cars (and, depending on how charitable you are, racism); it was not a response to 2-income households (as far as I know, women didn't join the labor force in large numbers until the 70s) nor spread out employment (most people still commuted into the city; suburban office parks didn't become popular until the crime wave in the late 60s or later). For the same reason, living near work wasn't a priority, at least as far as I know.
Suburbs always existed, and always had plenty of businesses. In fact, they used to have more, because people had to be able to walk to the stores! Separating stores from housing is new, and in contrast to your statement that
> expensive areas with good schools still need low-wage baristas and supermarket employees
many suburbs actually ban *all* commercial development, and thus don't need any such employees at all. All of the stores are far away from the most expensive areas. One of the more well-known is https://en.wikipedia.org/wiki/Los_Altos_Hills,_California, after it got into a fight with Waze.
A large number of people don't want to live in the city. Once the middle class among them can afford cars, there's an incentive for some of them to commute via car. Once they start moving out from the city, there's an incentive to build stores and services for them, and an incentive to relocate jobs to be closer to the workers. This incentivizes more workers to move to the suburbs, and the cycle repeats. It's not hard to see why the end of World War 2 might have been a specific trigger for this process to take off; you have the (practical) end of the Depression / New Deal, you have a large number of young men coming out of the military that need jobs and places to live, you have heavy industry that's coming off of war production, and you have the US with the world's largest functional economy.
If you start from an existing city with a reason for a dense core of jobs that can't go away (Wall Street for New York, the federal government for DC, port facilities for other cities), the city remains, though you get permanent traffic issues. If the jobs can go away (automaking for Detroit), the city hollows out and the jobs eventually go elsewhere. If you start with a new city built for cars, you end up with the sprawl of Los Angeles, balanced halfway between urban and suburban (and with it, permanent traffic problems).
As far as Los Altos Hills goes, it's the size of a large HOA development. At 10 square miles, yes, you don't need commercial development, because there's only 10,000 residents, all within 4 miles of stores. More importantly to my point, the children that live in Los Altos Hills almost certainly don't go to school with the children of the minimum wage workers that work in those stores. Likewise, the rich that live in American cities almost certainly don't send their kids to the same schools that the bottom half of the population attends.
Yeah, reducing households from 2 cars to 1, and replacing some number of car trips with other methods, can still greatly reduce costs. In terms of space (no need for 2 car garages and high parking minimums everywhere), in terms of maintenance (less wear and tear on roads, and fewer lanes), in terms of individual family finances (cars are expensive).
People lived in rural places before cars. How do we think they did it?
In rural areas, "before cars" is basically like the Amish do it. A town might be built around a train station. In many places in the US, the old train stations were repurposed long ago.
But there's no reason to look that far back. There are more paved roads than there used to be. Cars had fewer features and went slower (but were more dangerous). Consider the original Volkswagen beetle, which was manufactured in Mexico a long time after they stopped making them here. And without highways, people with cars didn't travel as far.
Anyway, I think it's better to think about rural, suburban, and urban areas separately.
That's like saying 'people lived before electricity'; we know how they did it, they had a much lower quality of life. You can tell all the middle class families in suburbia that fresh vegetables are for the rich only (look up urban food deserts) but they're not going to like it because they're losing a lot, which may be money, freedom, or time.
I think because people who like cities (or just don't like cars) don't understand why the people that live in suburbs like suburbs, attempts to fix the situation don't work. I live in an area with one car garages and every available parking space is full, and not necessarily because they have two cars; half of them are using their garages for storage. We've got mixed use zoning with ground level parts set aside for shops, and half the spaces are empty probably because it's hard to run an economically viable business at the small scale, mostly because of labor; the ones that can stay in business are either high margin professional services such as tax preparation, or niche family businesses like restaurants and ethnic bodegas (whose customers come by car; the people that live above them generally don't shop there).
"They had a lower quality of life" isn't really an answer. A lot of things are cast as being *impossible* to do without cars, without knowing what actually happened. "It was impossible to read at night before electricity" is very different from "electricity is much better than candles." The difference is that unlike with electricity, there actually are very good alternatives to cars for a lot of use cases. No developed city has given up electricity, but many desirable places have walking, cycling, and transit alongside cars.
I have looked up urban food deserts, and last I time I did, I actually found that empirical research did not support the idea that they're very prevalent or a barrier to buying food. To the extent they do exist, it's because of things like high poverty in certain areas and the fact that cars are expensive and transit wasn't good enough. Lots of people live in cities and have no problem buying food; there are grocery stories all over the place.
Sounds hideous. And also just what people fleeing California are fleeing from.
Whether or not one thinks bike paths and small apartment blocks are "hideous" but massive parking lots and traffic in downtown are not, it is a simple fact that extremely spread-out infrastructure is much more expensive per person and if your town isn't super-rich, you're going to have to make tradeoffs.
Disagree. Cities are way more expensive than suburbs. Dense housing is always more expensive than low density -- the building standards are much higher, the construction more complex, the cost of construction much higher in an urban setting, and the cost of bringing in the food and water and shipping out the shit and trash much greater. People move to the burbs because it's freaking cheaper, even accounting for the increased time you have to spend in a car. People move to the city because they need the excellent economic connections, or love the nightlife, and not because it's cheaper.
I must say I'm confused. I constantly hear how car-dependent suburbs are what the rich want, and only Europoors who can't afford a car are stuck in the crappy city. But now you're saying the suburbs are cheaper and people live in cities by swallowing the additional cost of living. Which is it? If it's actually cheaper to live in the suburbs, why don't more poor people move into them?
In any event, you're comparing the cost of housing rather than the cost of infrastructure, which is what I explicitly was referring to. Building the same road, but longer/wider, obviously is more expensive, and is not impacted by "higher building standards."
But also, "dense housing is always more expensive" is just wrong. The cost of building vs land results in different optimal points, depending on factors like land price, see e.g. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3674181
And of course, you are ignoring endogeneity. Yes, more dense housing gets built in places where housing is expensive. Causality runs like desirable -> expensive -> dense housing. If you built a single-family home in downtown Manhattan, would it be cheap? Of course not; within a city, single family homes are almost always more expensive than, say, condos (e.g. https://wowa.ca/toronto-housing-market or the data cited in https://youtu.be/uJ1ePlln6VE?t=602). If you knocked down towers to build single family homes, would the price of housing go down? Obviously not (unless you built enough to destroy all of the things that make it a city people want to go to in the first place). And, in a similar vein, if you took a suburb and made it slightly denser, housing will be cheaper than it otherwise would be.
Actually, both the rich and poor end up in cities, but for different reasons. The rich need the connections and don't give a crap about the cost -- they can afford the $10 million pied-a-terre in a skyscraper with a great view, and to pay the valet or take a helicopter to the airport. The poor don't always end up in cities -- the banlieues of Paris or a certain chunk of California's Central Valley are excellent counter-examples -- but they are definitely there in large numbers, because the city-country wage differential rises faster at the low end than the city-country cost of living, perhaps because of the presence of the rich (and rich companies), who don't mind paying a premium for the menials they hire to service them.
The cost of infrastructure in the suburbs is completely dominated by the cost of housing. Building roads is cheap by comparison. To replace the 40 houses on my ~1/4 mile block would cost ~$10 million, while building 1/4 mile of asphalt two-lane would cost maybe $500k. And I'm not considering the cost of land at all. (If I did, the price to build the houses would triple.)
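The back-of-envelope comparison above is easy to sanity-check. All the figures here are the commenter's rough assumptions (~$10M for 40 houses, ~$500k for the road), not real data:

```python
# Back-of-envelope check of the figures in the comment above.
# All numbers are the commenter's rough assumptions, not data.
total_housing_cost = 10_000_000   # replacing ~40 houses on a ~1/4 mile block
houses = 40
road_cost = 500_000               # ~1/4 mile of two-lane asphalt

housing_per_house = total_housing_cost / houses
road_per_house = road_cost / houses
road_share = road_cost / (total_housing_cost + road_cost)

print(f"housing per household: ${housing_per_house:,.0f}")   # $250,000
print(f"road per household:    ${road_per_house:,.0f}")      # $12,500
print(f"road share of total:   {road_share:.1%}")            # 4.8%
```

On these assumed numbers the road is under 5% of the combined cost, which is the point being made: land and buildings, not pavement, dominate suburban replacement cost.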
And I think you're wrong about the causality arrow on the second one. Los Angeles didn't start out with skyscrapers, it started out as a village of grass huts. The location of cities makes them desirable, for any number of economic and geographic reasons, and then people move there, and the housing gets denser as a partial offset to getting more expensive (since denser housing reduces the demand pressure on price), and then people who lose the bidding wars start moving down (to much lower quality housing, i.e. the slums) or out, to the suburbs, where you can get the same quality for lower cost.
In no way could you characterize California in this way. It's mostly the opposite, and highly car-centric.
I didn't mean to characterize California that way; I meant to characterize "what people fleeing California are fleeing from." I admit to assuming that the people fleeing California with enough money to bid up the prices of rural houses insanely are much more likely to be denizens of Santa Clara County fed up with the density and longing for wide open spaces, not rural Californians who sold the raisin farm and are looking for another place just like home, except four states over.
Many things are expensive; that's the point of being rich: being able to pay for them. The US is the richest society in history, so simply claiming that "spread-out infrastructure is very expensive" does not strike me as a good argument.
Both medicine and tertiary education are also very expensive, and the US has decided to spend obscene amounts of money on both of them to questionable benefit – at least with spread-out infrastructure you know what you get, and what you get is something a lot of people want.
Would you trade the last year of your life (the one spent on life support or dialysis or sleeping 20 hrs a day in an assisted living facility) for a nice suburban house? Hell yes, I would; and I'm hardly alone.
We certainly waste a lot of money on stupid health care costs, but it's still the case that the amount of money required is out of reach of below-average or even average income towns. Look at some of the numbers cited in https://www.youtube.com/watch?v=XfQUOHlAocY: The US is wealthy, but nowhere near infinitely so, and it shows.
Moreover, housing specifically is very artificially supply-limited. Making more money available for it would just increase the price.
Low-density development isn't the result of people spending money to reduce density. It's the result of rules restricting development. There's no "trade" happening. Not even an exception for retirement homes, which don't strain the schools or cause crime. https://betonit.substack.com/p/no-retirees-in-my-backyard
The folk economics of housing is just not rational:
https://marginalrevolution.com/marginalrevolution/2022/11/the-folk-economics-of-housing-is-folked-up.html
This might explain roads but not schools. Rural areas use school buses.
I'm not sure I understand your comment. Were you replying to Alex, or to my counter argument to Alex?
I'm doubtful that increased density (to the point where cars and school buses aren't needed) would reduce education expenses. I don't think Strong Towns blogs about this? Rural schools often have less funding, though.
Density reduces other costs that municipalities accrue such as roads, water, sewer. So more can be spent on education (in theory).
I don't think they talk about the cost of schools, no. There are some economies of scale which are best achieved with at least a medium size town, or several smaller ones working together. But I don't think schools have an extremely high floor in terms of cost, the way that other infrastructure does when you try to make it work for a very spread-out area.
I could see how it might help, by pushing the growth toward the big town districts and away from the little rural districts. The districts needing to expand their buildings and staff would be fewer, and they would be the ones with the best resources to do so.
With that being said, people with children often end up in the less dense areas due to bigger houses and more space for kids to run around, so maybe it wouldn't help at all.
And I feel like any changes to schools would happen years or decades after the initial changes to density. I also feel like it's much harder to predict than "higher density=cheaper housing". So I think school problems should be left out of the equation on this one, since they aren't directly related enough to be predictable.
The area started out as rural/farming, and has become more suburban in parts over time.
I think people that live in the residential areas in the two biggest towns could do some walking or biking. It wouldn't be convenient, and in some cases it would be dangerous, but I don't think it's the suburban housing developments that are spread out, for the most part. (I can think of a few exceptions that are along a highway between towns.)
The problem is the outlying little towns, the rural people on multi-acre tracts of land, and the farms. It would take many of these people hours to get to the nearest Costco, Target, or Home Depot by bike. Our significant winter snow and ice isn't conducive to biking, either.
You couldn't condense the area without abolishing about seven little tourist/residential towns and villages that are miles away from the nearest area with big box stores, a hospital, and other resources that people need. Even then, there would be the people living on the outskirts of civilization, on acreages in the woods, some of them farming a little or a lot. I don't think it would be practical to have multiple 20+ mile tram lines running back and forth between the main town and the smaller areas. Maybe more buses (Our bus system exists but isn't great) but probably not trams.
I agree with you regarding getting rid of single-family zoning. We desperately need more housing. But the structure of the area, not to mention the weather about a quarter of the year demands cars.
If walking or biking is dangerous, that sounds like a limitation of the infrastructure that could be improved. And walking and cycling paths are cheaper to maintain because A) they carry more people in less space, and B) vehicles are heavy and cause more wear and tear. Same with winter cycling--I don't think there's actually that big of a correlation between temperature and biking.
Are the "small outlying towns" separate entities, or clusters of development within the same town? It should be possible to have small towns with a reasonable number of stores reasonably close. As one of those videos indicates, big box stores aren't particularly efficient for the city financing them.
I'm not sure how small these "outlying towns" are, but living spread out in remote places can be expensive, or require giving up certain benefits. It's simply not feasible to provide wide paved roads, central water and sewage, garbage collection, advanced medical care, etc. in a way that is convenient to every tiny cluster of people. They might require cars (although it sounds like the population should be small enough for small roads, or even dirt or gravel for the more remote and rural areas), but at some point people have to make tradeoffs. Like having a septic tank instead of a water treatment plant (this may already be the case).
If the area is growing steadily, then it's probably not going to be all rural forever, and it would make sense to consider planning for a future other than "more sprawl forever." It sounds like this has already started happening ("has become more suburban"). Suburban doesn't have to mean "car dependent sprawl."
Without knowing more details I don't know how feasible trams vs bus vs no transit would be, but whatever clusters of development (housing, retail, office, etc.) there are can at least be walkable once you arrive, which again greatly reduces maintenance costs.
Regarding winter and cycling:
My area is probably on the right hand side of the "winter severity" bell curve. We have long winters, sometimes from November till April, with lots of snow and ice. Often there is significant snow or ice on the road for weeks at a time. It's the kind of area you need a four-wheel drive and/or snow tires to be really safe driving in during much of the winter. Sometimes the weather will drop below zero Fahrenheit for weeks at a time. Other times, intense wind and snow will cause difficult travel conditions for several hours. It gets dark pretty early, too. I wouldn't want to be at work, have a major wind/snowstorm hit, and be stuck at work because I decided to bike that day. I am aware that bikes equipped for snow exist, and I am definitely aware that proper winter clothing exists, but I think, of the people who might choose to bike, only about 5% would be willing to do so in the kind of severe weather we get regularly during the wintertime. I know I wouldn't be willing to do so--I'd rather walk, to be honest, because I think I would be less likely to fall. I am not an experienced cyclist and feel my feet are more trustworthy than something I need to balance on two wheels. Maybe a trike or four-wheeled cycle would be more stable or appropriate.
Regarding small, outlying towns: Think clusters of houses and touristy shops along a state or federal highway. The individual towns themselves are actually pretty walkable, but the people there need to go to the big town at least weekly due to lack of reasonably priced grocery stores. Many of them commute to work in the big town daily. So even if the big town were to become more walkable, a significant portion of traffic would remain from the people commuting in. I think some of the outlying towns have city water systems, but many people in the rural areas have septics and wells. It's actually the big town experiencing difficulties with its water infrastructure at the moment.
I am with you that the big town probably needs to become more dense and walkable, it would be great if we stopped using up all the land along the highways for houses, and we could use a boost to our public transit. (There's a similar area nearby that does buses pretty well, and I think we could really use that). However, because of the large number of people that need to drive in from outlying areas, the significant winter weather, and the large volume of tourists who drive, the big town also really needs to expand its capacity for vehicles.
There's famously a town in Finland that has quite a lot of winter biking (https://www.youtube.com/watch?v=Uhx-26GfCBU) but I understand if that's not a priority. I still am not sure if there's really a correlation--lots of places high in the Colorado mountains get snow from September through May and have extensive bike paths--but they also tend to have excess money. Even if they're only used part of the year, road damage is largely because of use rather than weather, so it still might save enough money on road maintenance to be worthwhile. But I'm definitely not sure.
If enough people are commuting into the big town, then a commuter tram line or something might make sense. Not sure what sort of numbers you're talking about. If people do have to drive in, then the city itself can remain walkable by having cars parked on the outside or underground, as described e.g. in https://www.youtube.com/watch?v=ZORzsubQA_M&ab_channel=Vox
Without more details, no one can figure out why the big town is having issues with its water system. Did the population growth just outstrip capacity?
On ice, it's a lot easier to fall over when walking than biking - the bike is self-stabilizing and nothing is pulling you sideways, while walking is a process of falling forward and catching yourself with a leg in front where lots of things can go wrong. The exception is making tight turns, of course.
I agree with the rest of your argument, though. In particular, biking over snow is a huge pain in the ass without fat tires.
you have a ton of growth and a giant new tax base to make the town better. Take it from someone who has spent time in areas that are literally dying, with abandoned homes and a shrinking tax base, these are good problems to have!
Yeah, up until 2020 I responded to complaints about the area growing with "Well, at least it's not economically depressed." The wildness that happened during the pandemic changed my attitude somewhat, but you're right. Thank you for the perspective.
This sounds like you could be describing almost any small to midsize growing municipality in the USA.
I say this a little in jest, but really: I’ve moved between several of these areas, and at this point I can predict the mood of the populace, government, and immigrants by heart. Same complaint about vacation rentals, even as they dig their heels in against new multi family/dense housing, same urge to hike property tax rates despite little evidence that current revenues are being spent productively or that there is a budget shortfall (versus an inability to see the results of spent money fast enough to satiate the spenders).
I would urge you to be skeptical. People move because the area works. No town ever became a hellhole because a bunch of people decided they loved it there. Lots of folks like to move into a desirable area and then pull up the drawbridge, and trumpet loudly the whole time that they’re protecting the place.
On the other hand, an area can become a hellhole for one group of people when another group of people moves in (e.g. rich people / poor people, in either direction).
People tend to instinctively fear change, so most panics are overblown, but just because the people who move in like the place doesn't necessarily mean they can't have significant negative impact on people who already live there and have different tastes or needs or means.
Thank you, this is encouraging! I probably need to stay off the stupid local facebook groups, where the "pull up the drawbridge" call is pretty loud, and trust the market to fix the housing problem. Eventually.
Anyway, you described the attitudes of the various groups here perfectly. Glad to hear that it's tale as old as time and might be less indicative of real, permanent problems than just growing pains that will probably settle out.
I have lived all my life in a small town in California, and this happened here about 30 years ago. It was partly precipitated by the state expanding a nearby prison, providing lots of jobs and reasons for people to move here, and partly by our proximity to Los Angeles (within distant commuting distance).
I don't have any simple solutions to offer, because of course there aren't any. But people who look dismissively on California should awaken to the fact that many of California's policies are the result of attempts to deal with some of the issues you raise. We have 40 million people. Many of the low-key, lackadaisical approaches that rural states use aren't enough here.
I remember being on a trip to Montana not long after California introduced vapor recovery hoses on gas pump nozzles. They were kind of unwieldy and an inconvenience, and some people at a gas station in Montana were scoffing about them. As well they could, in a huge state with an insignificant population. But California didn't introduce vapor recovery legislation just to annoy residents -- officials were trying to deal with very real air quality problems. Duh.
And if you have overcrowded schools in need of new facilities and infrastructure, then taxes gotta go up, or ya gotta pass a school bond to raise money, or something different.
As derided and mocked as much legislation is, the vast majority of it arose in an attempt to solve a problem. California law requires swimming pools to have a fence and gate because far too many innocent children were drowning. It wasn't just legislators trying to inconvenience pool owners.
Good luck in your changing rural community, and remember: many laws originate to deal with real problems, and growing populations DO cause problems.
Yes, exactly. We bought property in part because the roads were not paved, and we intended to ride our horses from our place to a nearby established network of horse trails. The summer after we moved in they paved the road for gravel haulers for a big construction project. That’s over now, but there is a tempting straight stretch of over a quarter mile right in front of our driveway that a variety of people use to test their tires, their engines and their brakes. I took a young green draft horse out for exposure to traffic. An idiot in a large pickup came around the corner, and accelerated towards us, I guess hoping for a show. This sensible mare did what she had been trained to do in an unexpected situation, she froze. The pickup slammed on his brakes and skidded around us, cursed us roundly, but drove on. We don’t take our horses on the road any more. The last road millage failed, the pavement is deteriorating steadily, but the motorcycles keep roaring by. Ugh.
Yeah, I realize that part of the "Live and let live" idea only works if you live where somebody's action isn't likely to directly affect another person because of close proximity. My area is likely to swing left, and I realize that is partly for a good reason: More management is needed.
With that being said, I think there's a way to get things done without passing laws. Good incentives and social pressure can encourage people to make the right decision without actually being compelled to do so, and I think the right thing to do most of the time is to suggest rather than order. What I don't want to see is a bunch of permanent laws passed that solve a bunch of problems that the area has at the moment and end up causing over-regulation in the long term.
My local school, which I attended, has been in a fast-growing neighborhood. They passed a bond about 6 years ago, built a huge new wing onto the school, and suddenly they're full again. They tried passing a bond again, but in an area where property taxes are already rising, nobody was surprised when it failed. The people in the area need a school, but they also need not to be paying crazy taxes. I suspect this will end up with band-aid solutions like temporary classrooms that will be worse for everybody: The taxpayers will pay more in the end once something eventually does get passed, and kids will end up in subpar temporary buildings (our weather is probably not great for any kind of temporary infrastructure.)
Thanks for your thoughts, and the reminder that places don't stay the same forever and that regulation has a proper place and purpose.
What sorts of incentives and social pressure do you apply to the people who ban “To Kill a Mockingbird” from the school library? I am very interested in any suggestions.
Banning it because it makes the black guy look good, or banning it because it makes the woman look bad?
Horseshoe theory, baby. It's horseshoes all the way down.
(My username is a TKAM reference, BTW. I am not unusually fond of pickles. Well, I am unusually fond of pickles, but not enough to name myself after them on the internet.)
Book banning is an interesting one to me. I am conservative(ish), and I don't know a single person in my circles who would object to something like "To Kill a Mockingbird", 1984, or "Huckleberry Finn." Maybe they would argue that elementary aged children shouldn't read these books, but all three were part of my high school curriculum, along with "Lord of the Flies" and "The Kite Runner", both of which have some sexual content. Nobody complained. The class was given the option to do an alternate assignment to "The Kite Runner" and as far as I know, nobody did.
I think it's a valid argument that young children shouldn't read certain books. "To Kill a Mockingbird" was on the grown-up books shelf when I was young, and I heard that it was good and asked to read it when I was about ten. I was told no, to wait, and I did wait, and I think that was good because I was much more ready to process a story about a false rape allegation, horrific racism, hypocrisy, poverty, and domestic abuse when I was fifteen than when I was ten or eleven.
So with all that being said, I would do two things if I encountered somebody campaigning to remove TKAM or similar from a school library or English class curriculum.
1. Talk to them and explain the value I believe the book has, and why removing books from libraries is not a good thing to do on principle. The social pressure can come in the form of me, as somebody in their social/political circle, disagreeing with them openly and strongly.
2. Argue that maybe instead of removing the book from the curriculum entirely, it can be moved up a few grades. This is probably especially helpful for books that are being challenged for sexual content, like 1984.
If you are currently working to keep TKAM in a library, but you aren't in a position to do option one, maybe look for the conservatives in your area who aren't for banning books, and ask them to do option 1 for you? I know it's hard to find the reasonable people when there's a sect of squeaky wheels being very noisy about why TKAM should be banned, but I'm pretty sure that's a pretty fringe position in most parts of the US, and most of your local conservatives, and all your local libertarians, are just as annoyed as you are at the squeaky wheel people. I know I would be.
Are tax rates linked to income rates? Changes in average valuation of houses? Do tax rates have a built-in fudge factor for the fact that it will cost far more today to repair the main road and pave that alternate dirt road that now has more traffic on it than the main road, than it cost to pave the main road originally way back whenever? Small rural infrastructure gets overwhelmed these days because no one remembers when it wasn't there at all, so they take it for granted. This requires lots of communication on the part of local officials that no one wants to hear, especially if the people who should be communicating this information don't acknowledge the truth of it themselves.
There is no sales tax, so the local taxes are the state income tax and the pretty significant property taxes which are based on house prices. (Whether that's your individual house price or the average in your neighborhood, I am unsure--not a homeowner!)
It's funny you mention the main road and alternate dirt roads. I can think of a couple pairs of roads like this. Not quite the situation you describe, but people are finding creative shortcuts through residential areas or farm fields to get where they need to go and avoid traffic.
That's a good point about the people who actually need to be doing the communicating. I feel like the city and county law enforcement are the ones who are most vocal about the strain on their resources, while some of the rest of the leadership are pretty quiet. The road complaints are coming from citizens, including people who have to drive in the most congested part of town, and then people who live along the shortcut roads and are trying to raise kids who want to ride bikes and stuff along an increasingly high-traffic, high-speed street.
Yup, we already have some fun culture war stuff going on, and have since the early Trump years. A family member and I passed a picturesque downtown park, and my family member asked "Oh, what do they use that park for?"
"Protests, mostly."
(Left wing, right wing, occasionally devolving into minor violence, like some random person pepper spraying protesters. I find most of the protests themselves to be pretty meaningless, but the incidents and negative dialog that arise from them are making me really concerned.)
I never made the connection that the people moving here would be more conservative in the pro-policing sense, but that's probably true. Thanks for your thoughts.
Reminds me of the park here. Back when destroying statues was the hip way to protest racism, the local BLM affiliate pulled down and destroyed a bronze statue of a civil war soldier. Of course, this being north of the Mason-Dixon line, the statue was for and of a volunteer regiment of boys in blue, but those protestors at least demonstrated how much they hated racism.
Yeah, the people moving here are the fancy houses sort of people. Some of the fancy houses are clustered in resort areas, which might cause some problems for those of us who don't live in resorts, if police start focusing on those areas. But I think you make a good point. Probably underfunding, and not public opinion, is the biggest threat to good local law enforcement here.
For the most part, it's pretty much just normal protests. The local branch of the Women's march every year. Some protests after George Floyd was killed. Some people protesting abortion by hanging up baby clothes. Some people protesting Dobbs by dressing up as handmaids from "The Handmaid's tale". (The pepper spray incident was against those protesters.) I don't know for sure, but I think the local high schoolers tend to be a good percentage of the left-wing protests.
Our area has historically had some weirdos of the far-right variety hanging around, and occasionally something crazy they do sparks protests and controversy. I am against far-right weirdos and believe their philosophy is awful and dangerous, but I honestly think the best way to handle them is to not feed the trolls. They just need to be ignored and/or laughed at until they realize that the far right weirdness is stupid.
Not your keys means not your money. When will people ever start to learn this?
And handling your own keys is simply unrealistic behavior for a huge, huge % of humanity. Hence why some of us have been shouting 'hey this whole crypto thing is never going to work out' for years now
I like Argent, is there some reason Argent is unrealistic or can't work in real life?
If in $CURRENT_YEAR you're so thoroughly computer-illiterate that you can't handle your own keys, then I don't know what to tell you.
Maybe there needs to be more economic/evolutionary pressure towards computer literacy? ¯\_(ツ)_/¯
(Well, come to think of it, custodial exchanges do provide this pressure, each time one of them bites the dust...)
Dude. The bell curve has two sides. Half of humanity has an IQ under 100.
And even among the computer literate, people are busy. I'm sure I could figure out how to manage my own crypto <mumbles> <mumbles> thing, but I have other shit to do. When the hype got really high I stuck a few hundred in Coinbase. Which turned out to be a mistake from an investing point of view, but they haven't ripped me off yet.
The left half of the curve doesn't have much money to invest in crypto, though. Or at least, not for long.
Payday Loans.
Most of the population will never be capable of handling their own keys.
How many kids do you have, oh-so-well-adapted mmirate? Or do you measure adaptation in imaginary economic units? Perhaps there is a cryptographic sequence out there that calls you father? Inheritor of the earth, no doubt.
I don't think you could demonstrate any better why crypto will remain fringe forever. Perfect response, thank you
I don't know how to hold or own crypto. I'm sure I could learn it. But I have enough stuff on my plate. I've already got a few bank and financial accounts to keep track of, I've got medical visits I need to stay on top of, I have government forms and taxes that need to be filled out promptly and paid, and a bunch of random tech accounts I need to manage for my job. Just making that list of things feels minorly stressful cuz I'm probably forgetting other important stuff.
Transaction costs in life add up. Especially when those costs are my time and level of frustration. If someone earns ~$100/hr then a 1 hour setup is 900 dollars cheaper than a ten hour setup.
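The arithmetic in that comment is straightforward opportunity cost. A minimal sketch, where the $100/hr rate and the one-hour vs ten-hour setup times are the commenter's assumptions:

```python
# Hypothetical opportunity-cost arithmetic from the comment above.
# The hourly rate and setup times are assumptions, not measurements.
hourly_rate = 100         # assumed value of one's time, $/hr
simple_setup_hours = 1    # e.g. opening an account with a custodial exchange
diy_setup_hours = 10      # e.g. self-custody: keys, backups, hardware wallet

time_cost_saved = (diy_setup_hours - simple_setup_hours) * hourly_rate
print(time_cost_saved)    # 900
```

The point being that for busy people, nine hours of setup friction is a real price, even before counting ongoing key-management overhead.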
Everybody here seems to be casually talking about FTX being a big fraud. My impressions (mostly based on reading Matt Levine) were more focused on extremely poor risk management rather than explicit fraud.
Does anybody have a good explanation of the fraud angle, as opposed to the "accepted your own stock as collateral against loans" angle?
I believe loaning out the deposits was against the terms of service so that is what could be called fraud.
Also, SBF is reported to have had a back door that allowed him to change accounting numbers with no audit trail. Highly unlikely he wasn't using this for fraud!
It is unclear if it was always a fraud. Many things point to "probably", but there's no smoking gun yet. It is 100% clear that SBF committed crimes sometime during the summer and has been lying since then.
I think it was unlikely to be an honest mistake even when the last Matt Levine article came out, and things have evolved since then. I heard rumors that someone found a backdoor they used to transfer the funds without risk management people noticing. Also, the official company account tweeted that they were allowing withdrawals from the Bahamas only in order to comply with a request from Bahamian regulators, and then the Bahamian regulator tweeted that there was no such regulation and they had never asked FTX to do this - I think this might have been a ruse to let insiders withdraw first without provoking suspicion.
A lot of this is still rumors but I would really really like to believe there was nothing outright illegal here and I still think there's a <5% chance that's true.
Ok, yeah that stuff sounds way more explicitly fraudulent instead of just stupid.
Thanks for the information.
"Does anybody have a good explanation of the fraud angle, as opposed to the "accepted your own stock as collateral against loans" angle?"
Maybe it didn't start out as fraud, but it sure looks like it ended as one.
Funds to the tune of USD 600M were drained from users on the day of the bankruptcy filing, which FTX claimed was a "hack." Very convenient timing on that one; most likely an inside job. It was facilitated through an app update, which is decidedly not trivial to do from the outside, let alone on short notice.
Yesterday, SBF and FTX staff were detained in the Bahamas while trying to get to Dubai, possibly under the assumption that there is no extradition treaty with the US.
Ok, that "hack" sounds like some pretty straightforward fraud.
Matt makes the distinction between illiquidity and insolvency. The fraud was that they told everyone they had 1:1 deposits, full liquidity. In reality, they took that cash and loaned it to Alameda to cover for its bad bets. They were clearly hoping that they could make new bets with this cash, make the money back, and pay back into the depositors accounts. But... it blew up.
They clearly weren't supposed to loan out depositors' cash. They didn't own it; they were just supposed to be holding it.
So my understanding of Matt's explanation is that it was just obviously not the case that they were holding assets 1:1 in the strictest sense, and nobody who knows how futures work could have thought that they were.
A big part of their business was offering leverage and futures and things that inherently involve borrowing and lending assets.
I guess the main fraudulent claim was that they weren't using the funds for any lending outside of the sort that is inherently involved in offering futures or margin loans?
There is a big difference between making a market in something and taking a position. Market makers will have some exposure, but the point is to get the customers to hold the risk. At least in theory, the market should be able to go anywhere without dragging the market maker with it. Maybe this market is too volatile, but more likely someone goofed.
Yep, doing risky things with your customers' money to try to pay off your own debts, and telling yourself that you will have enough later to pay back your customers, is fraud based on self-delusion. Or just fraud. Geez. Large amounts of money make people do crazy, stupid and illegal things.
I'm sympathetic. You put a lot of your energy into this effective altruism thing, and now some significant part of it has been proven fraudulent. Not nearly all or even most, but a big and visible part, and that will no doubt give the NYT/New Yorker crowd another excuse to go after EA and rationalists and more ammo to make you look bad.
Emotionally, I can only say: this is a HUGE problem in charity, to the point there are multiple organizations giving ratings to various charities. So whatever they try to say, this happens a lot in the charitable world. Consider the base rate, and cut yourself some slack.
Just because a criminal donates to a charity doesn't mean the charity itself did anything criminal.
Are we overcorrecting for the FTX situation? I admit my belief in seemingly-ethical founders is shaken. But at the end of the day this is an N of one. I’m really not sure how to update my beliefs.
Yeah. I didn't change my opinion on Jews because of the Madoff thing or the viability of natural gas because of Enron. Not sure why FTX should change my view of EA or crypto (which was already low).
Yes, everyone always overupdates on every big news story. But it would be offensive to mention that now so the only option is to wait until the panic dies down and then double-check all the updates we made.
I understand the reasoning, which both you and Eliezer have argued, that says that if traditional finance types couldn't predict FTX's downfall, who are we to say that we should have been able to? There still seems to be something missing there. Consider the following:
1) The picture of the investors' reasoning described by a profile those investors themselves published: https://web.archive.org/web/20221109230422/https://www.sequoiacap.com/article/sam-bankman-fried-spotlight/
> What Sequoia was reacting to was the scale of SBF’s vision. It wasn’t a story about how we might use fintech in the future, or crypto, or a new kind of bank. It was a vision about the future of money itself—with a total addressable market of every person on the entire planet.
> “I sit ten feet from him, and I walked over, thinking, Oh, shit, that was really good,” remembers Arora. “And it turns out that that fucker was playing League of Legends through the entire meeting.”
This, um, does not strike me as the sort of reasoning that rationalists would endorse, at the very least.
2) Let's carry your hypothetical forward and say that some traditional finance people had seen something awry at FTX. What would we expect them to have done to capitalize on that? This is related to a classic asymmetry: you can't short a private company's stock, so investors can go bananas with too-high valuations and no one can do anything about it. A skeptic also couldn't just, say, invest in Binance instead, since they aren't pure competitors; as CZ acknowledged, FTX's downfall hurts the whole industry. Maybe there's some other complicated maneuver you could have performed -- I'm no crypto expert -- but it seems like the sort of thing where the best move is not to play.
3) If we were actually "trusting the experts", why didn't anyone explicitly say so? I don't remember any pieces where someone actually grappled with the volatility of crypto, and made the case to the EA community that "FTX is different, you can trust them" or "I was skeptical until I saw that so and so investors had done some due diligence, and nothing they invest in ever collapses." (That sentence sounding preposterous is my previous point.)
4) There's a much more obvious explanation that we really need to wrestle with and which Scott's post inadvertently admits: We blindly trusted Alameda/FTX because we liked the people there. They speak our language, they come out of our own community. These relationships blinded us to the possibility of the disaster that's unfolding now.
1. Disagree re: Sequoia. Their reasoning was cringe, but in fact SBF and co created a $30 billion company. My impression is most of that was real work and then the fraud started around 6/2022. So he's either someone who's really good at making money and ethical, or really good at making money but unethical. I don't know why him playing League of Legends should shift me into the second category.
2. If it were me I would have shorted Solana, although I'm not an actual trader and I don't know if there would have been timing issues. Certainly they could have shorted Solana once CZ dumped FTT, or when the crisis seemed to be sort of starting but nobody was sure yet.
3. This is a weird argument. If tomorrow it turns out that Neptune doesn't exist, is it fair for you to say you were just trusting the astronomers? Then why didn't you say "I was just trusting astronomers"? When you do consensus things that everyone agrees are fine, you don't need to lampshade that you're doing that.
I think people are getting screwed up here because there are two slightly different questions. First is "was it bad to deal with crypto people given that crypto has some well-known inherent volatility and scamminess and ethical issues?" The second is "was it bad to deal with FTX given that they were a giant scam of the sort nobody predicted beforehand". I think the first is a reasonable debate and the second is "nobody predicted it so we didn't either". But I think if this debate happens now those two questions will inevitably get really confused.
4. I think if you had asked me a month ago, I would have been indifferent between trusting FTX, trusting Binance, trusting Coinbase, trusting Facebook, and trusting Wal-Mart, all for mostly the same reasons (they are big companies that the market seems to like). Maybe FTX would get a small bonus for having one person I knew + apparent EA values, and maybe a small malus for being in crypto, but overall I think it would just be my big-respectable-company prior.
1) I don't think the argument is about his playing LoL, but about the fact that the investors portrayed his playing LoL during a meeting as a positive. The reasoning presented is not epistemically sound. This might not mean much, maybe puff pieces like that are the norm and the discussion of finances are left for dry reports, but the argument presented was not that his playing LoL should shift our view. If I were to make such an argument, it would not be based on the fact that he plays LoL but on the fact that he does so during investor meetings.
4) I think it's obvious that we should trust Wal-Mart more than the others (for relevant values of trust). Wal-Mart is a mature and stable business, the others are less proven and more volatile. Wal-Mart is publicly traded, which as I understand requires more disclosure. I don't see the argument for treating FTX as being in the same class as Wal-Mart when it comes to using the market to justify trust.
Wait, you're including *Binance* in the list of trustworthy companies? Binance of the "Bond villain compliance strategy" (e.g. https://twitter.com/patio11/status/1411869320917884932)? That should at least make you cautious, Binance isn't Sci-Hub.
Putting Coinbase and Wal-Mart in the same category is already optimistic, the time-to-failure of crypto exchanges and of supermarkets is rather different!
> My impression is most of that was real work and then the fraud started around 6/2022
I don’t really know anything about this situation specifically, but in the aggregate it sure looks a lot like a classic margin call scenario to me. You have a highly leveraged position based on assets that suddenly decline in value, leaving you exposed, and you borrow assets from somewhere else to patch it up because you’re sure that the assets will recover and you’ll be able to make it all nice again. And then it doesn’t happen that way.
I have a feeling it’s going to turn out to be more hubris than fraud.
Re 2: Sure, you could have shorted either Solana or FTT. But it's rarely a good idea to short a company just because you know something is off, especially with assets as volatile as hyped-up crypto tokens. Even if you know for sure that the company is going to 0 within 10 years (and the conditional likelihood of this given something being off may not be super-high), the token may pump a lot before that happens, like 5-10x or more. And unless you have a lot of capital to spare to avoid liquidations, you may face much greater losses than you hope to gain from your short.
This is a big problem when shorting volatile assets over a long time horizon. But it can also get bad even in short time-horizons. Eg personally I experienced this during the Terra/Luna collapse. I made good money shorting their stablecoin Terra. But ended up getting short-squeezed on Luna when, right in the middle of the collapse, the Luna price suddenly increased 500%+ in less than an hour.
Markets can remain irrational for a lot longer than you can remain solvent.
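The liquidation risk described above can be sketched with toy numbers. This uses a simplified cross-margin formula and hypothetical prices of my own, not any actual exchange's terms:

```python
# Sketch of why shorting a volatile token can wipe you out even if your
# thesis (price eventually goes to 0) is right. Hypothetical numbers only.

def short_liquidation_price(entry_price: float, leverage: float,
                            maintenance_margin: float = 0.05) -> float:
    """Approximate price at which a leveraged short is force-closed.

    With leverage L, your margin covers roughly a 1/L adverse move;
    the position is liquidated when the price rises past
    entry * (1 + 1/L - maintenance_margin).
    """
    return entry_price * (1 + 1 / leverage - maintenance_margin)

# Short a token at $10 with 2x leverage: liquidated near $14.50,
# so a 5-10x pump before the eventual collapse blows straight through it.
print(short_liquidation_price(10, 2))

# Higher leverage gets liquidated even sooner (here, near $11.50):
print(short_liquidation_price(10, 5))
```

This matches the Luna anecdote above: a 500%+ spike in under an hour clears any of these levels, so the short loses everything before the asset ever reaches zero.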
Sequoia has very different incentives. If they assess FTX as having a 99% chance of imploding into a black hole of fraud and swallowing all their money, and a 1% chance of growing 200-fold, that's an okay bet for them. They probably have appropriate techniques for segregating themselves from criminal investigations into their fundees; they make many different investments, so they are fine with most of them failing as long as on average they are profitable. If the cornerstone of your philanthropic ecosystem has a 99% chance of collapsing, that's not great, though. Both because there is no large pool of other bets to make the actual outcome converge to the expected value, and because a collapse is not zero-value (like it probably more or less is for Sequoia) but can be significantly negative: there is PR damage, loss of trust, emotional damage, talented and motivated people ending up in debt, etc. So freeriding on a VC firm's evaluation doesn't really make sense.
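The asymmetry in the comment above can be made concrete with toy numbers — illustrative probabilities and payoffs of my own, not anyone's actual model:

```python
# Toy expected-value comparison between a diversified VC and a
# philanthropic ecosystem making the same bet. Illustrative numbers only.

p_blowup, p_moonshot = 0.99, 0.01

# VC: a blowup is roughly zero-value, and a portfolio of many such
# bets converges toward the expected value.
vc_ev = p_blowup * 0.0 + p_moonshot * 200.0   # per dollar invested
print(vc_ev)   # ~2.0: a fine bet, repeated across a portfolio

# Funder: a blowup is *negative* (PR damage, lost trust, people in
# debt), and there is no portfolio to average over. Say the downside
# is -3 per dollar:
funder_ev = p_blowup * (-3.0) + p_moonshot * 200.0
print(funder_ev)  # ~-0.97: the same odds make a terrible single bet
```

The same probabilities flip sign depending on whether you can diversify and on whether failure costs you more than the money.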
I think the crypto aspect of this is a much bigger deal.
1. That $30 billion valuation you cite was entirely based on the speculative prices of FTX's crypto assets. I think there are many reasons to argue that it "wasn't real" in the same sense that Ponzi scheme valuations aren't "real" value, and that we could have suspected ahead of time just by it being crypto. You can't say the same about Facebook or Wal-Mart.
2. I'm not talking about anything anyone could have done once CZ dumped FTT. I'm talking about things finance people could have done PRIOR to EA's gatekeepers like yourself essentially saying "yes, it seems like a good idea to fund a bunch of new charities this way".
3. There's not a consensus of any sort that crypto is a reliable business of the sort that you want to build a large number of charitable organizations on top of. Tons of finance-savvy people think crypto is a mostly a scam; no astronomy-savvy people thinks Neptune is.
I agree there are two slightly different questions I'm slightly motte-and-bailey'ing: "Is crypto too volatile to build an entire apparatus of charity organizations on?" and "Did SBF/FTX do something unethical on top of being in a generally risky business?" But they're also linked: By Eliezer's current guess, the general crypto downturn of mid-2022 caused a bunch of trouble for SBF/FTX that led them to take riskier/unethical actions rather than eat the losses.
4. I agree with this characterization of the prior calculus, I just think the malus for being in crypto should have been large, especially in the uncertainty. And yeah, that'd count against Binance and Coinbase too. But my big point is that given that elevated uncertainty, additional investigative information had a lot of value to EAs considering building a bunch of new charities out of SBF's money.
a) There might've been some pro-SBF bias specifically.
b) There likely was a pro-crypto bias in significant chunks of the community in general.
c) There likely was a recency bias: no hot $30-billion-valuation decacorns backed by top VCs had spectacularly blown up recently, making charities and apparently Scott underestimate the distinction between stable old public companies and volatile private companies in a risky sector.
I'm pretty sure a) didn't matter that much, EA was (unfortunately) taking money from a bunch of other crypto guys.
b) and c) likely did.
But that aside, I don't see the argument against EA piggybacking off Sequoia's assessment of FTX, given that Sequoia certainly had better expertise and better information.
Without knowing the extent to which EA orgs made their futures highly dependent on FTX money (do we know?), it's hard for us to judge whether their course of action was appropriate given the general riskiness of VC valuations, in crypto in particular, and after the crash of this year in particular, or not.
It's fair to ask if EA orgs treated crypto in their risk assessment - it's not fair to expect them to have taken a stand on crypto in a way society in general/many other orgs didn't.
I don't crypto - was it actually possible to short FTX? That is, if traditional finance types started to investigate FTX, could they have made money by alerting the world to the fraud?
Very easy. You just short FTT token on another futures exchange.
That's pretty much what the CEO of rival exchange Binance did. He saw some things in their financials that were sketchy and so he dumped his whole stash of FTX-associated assets.
Was that really a short, or was he just getting those assets off the books because he thought they stank? A short would imply he intended to buy them back after their collapse.
It was/is a private company, so you couldn't short the stock, but it probably was possible to short FTT and other crypto tokens.
Can anyone recommend a good english-language history of the reign of terror following the French revolution? I want to learn more about the period than I got in high-school, but I don't have enough background in the area to know which are the reputable histories.
Simon Schama's "Citizens" is an excellent and thorough overview of the French Revolution as a whole, from its antecedents through to its longer-term sequelae.
Was about to recommend this when I remembered to check if someone had beaten me to it. +1.
The whole thing is well written and interesting, but if you want you could just skip to The Terror.
Thank you! I'll check it out.
There are English translations of “L'Ancien Régime et la Révolution” by Alexis de Tocqueville.
Thanks for the recommendation!
If you prefer fiction, to give you a sense of how people felt and how they self-justified things, you might want to try "A Place of Greater Safety" by Hilary Mantel (the same woman who wrote the Wolf Hall trilogy).
Looks interesting. The Wikipedia page is intriguing!
'Revolutions' podcast (the same guy who did 'The History of Rome') has a French Revolution part, and you could either work your way through it all or find where your interests start.
Thanks for the recommendation!
If I'm understanding, it sounds like Scott here twice refutes the standard EA notion that there are no diminishing returns to money used for charity, in his lobbying example and his opening sentence to paragraph 2:
>The past year has been a terrible time to be a charitable funder, since FTX ate every opportunity so quickly that everyone else had trouble finding good not-yet-funded projects.
Does anyone still want to defend the notion that returns to money for charity are linear?
Yeah, in retrospect I elided things in that sentence - this is mostly true for EA style charities, and true most of all for AI alignment charities. There are only a few hundred people in the world qualified to do AI alignment, and no obvious quick ways to turn more money into more alignment except waiting for those people to build more infrastructure.
I think it's much less true for eg GiveDirectly, a group that gives money directly to poor Africans, of whom there are a pretty unlimited amount.
I think different parts of EA are thinking more about AI alignment vs. GiveDirectly and so have different opinions on this question. If you're just looking for any charity at all, then the no-diminishing-returns argument holds at least up to the number of poor Africans.
> and no obvious quick ways to turn more money into more alignment except waiting for those people to build more infrastructure.
Has anyone considered "buying out as many existing AI companies as possible"? If enough money could be gathered, it sure seems like an effective way to turn lots of money into "fewer people doing dangerous vs. 'safe' AI research".
Yudkowsky's favorite example of unsafe research is Facebook's lab. Gathering enough money to buy out Facebook-level corps is probably not near-term feasible.
Probably not, but it sure is a situation where, in fact, arbitrary additional amounts of money could do a lot of good if we actually had them!
While reading it I did find myself wondering if I was reading the open thread! Glad to hear ACX grants are unaffected.
Anyway, I have slowly been building a project for two years and I'm finally happy enough to put it here: www.i3italy.org . It is a community interest company whose main goal is to provide a guidebook aimed at Italians living in England, written in Italian. If that seems niche, it is - but these are 520,000 people, many vulnerable and hit by Brexit, then Covid-19, then Brexit again, and that is without considering the mess that Italian politics and (some) institutions have been in over the past few years.
Somehow, my journey of self-discovery on how to use 10,000 out of my 80,000 hours ended up realising that this is the best way to maximise the impact of my free volunteering time on the world... taking into account my skillset, knowledge, free time and willingness to learn new skills (so this answer is probably very unique to me). Really hope I can bring it to its full potential: wish me luck, ACX/EA crowd!
Hi Cesare, just wanted to say... great work! It is not at all relevant to me but I browsed the site and have no doubt that it'll have a positive impact for Italians that are located in England. Keep pushing, keep improving, good luck!
I'm skeptical that these sorts of scandal markets will help much. As you said yourself:
> There’s a word for the activity of figuring out which financial entities are better or worse at their business than everyone else thinks, maximizing your exposure to the good ones, and minimizing your exposure to the bad ones. That word is “finance”. If you think you’re better at it than all the VCs, billionaires, and traders who trusted FTX - and better than all the competitors and hostile media outlets who tried to attack FTX on unrelated things while missing the actual disaster lurking below the surface - then please start a company, make $10 billion, and donate it to the victims of the last group of EAs who thought they were better at finance than everyone else in the world. Otherwise, please chill.
Manifold does a good job of aggregating public info, but I have yet to see a market that clearly figured out something important before the public did. Usually it's markets that jump around based on news articles or Tweets that some Manifolder found, not the other way around. For example, there *was* a market on FTX, and it got blindsided too:
https://manifold.markets/NathanpmYoung/will-ftx-go-bankrupt-before-2024
Maybe this'll change with more volume, but I'm not optimistic.
Still, just in case:
https://manifold.markets/IsaacKing/will-scott-alexander-be-found-to-ha
:)
Oh no, you're going to realize that I'm not actually writing blog posts, just putting a bunch of individually-meaningless characters together on a computer screen and hoping nobody notices!
I'm not sure how to interpret this reply, sorry. I can tell you're being sarcastic, I assume because you disagree, but I don't know what you're actually trying to convey, or why exactly you disagree.
Joke, sorry.
I think this can become a useful Schelling point for information. If you want to make a tip-off, just ask someone to push the market up a little.
This has been going on for a long time. Touts.
I have to say I really don’t get prediction markets, meaning why they are any better than placing a bet on something with Ladbrokes or using the stock market if you want to play with money. I fail to see how any predictive value can come from such a market, because if it can be gamed by an individual for their own interest, it seems to lose all utility as a predictor of anything.
I'm confused by what you're suggesting. If you have private information, why ask someone else to place the bet rather than do it yourself?
Because manifold isn't anonymous.
Here, I made a prediction market on whether prediction markets will ever turn up anything useful.
https://manifold.markets/IsaacKing/will-any-prediction-market-clearly
Yeah, these tiny prediction markets don't do nearly enough to incentivize the sort of investigative work needed.
How much are you, J Random Scammer, willing to spend on the reputation market to keep your name pristine in order to keep the scam going?
I don't think this is about incentivizing investigative work. I think there's a question of "did Joe Average Charity Founder err in accepting FTX money because it should have been obvious to him that FTX was a fraud". If the "is FTX a fraud?" prediction market is at <1%, that shows it should not have been obvious to him. Maybe FTX was a fraud, but in the way where it was investigative reporters' responsibility to establish and not the way where ordinary people should have known it from already-available information.
I mean, if the market is tiny then there's also very little information there for Joe Average Charity Founder to use. The FTX bankruptcy market was at 4% for over a month with no activity. How is that supposed to have helped Joe?
I think it's proving the interesting and nontrivial statement "there isn't enough information to conclude or even inspire interest in a prediction market that FTX is fraudulent", which is the question I'm interested in. I bet that market rocketed way up as soon as information came in!
> I bet that market rocketed way up as soon as information came in!
Doesn’t that make it a reactive market rather than a predictive one?
Indeed it did. But you phrased these prediction markets as "the solution" to questions like "How can I ever trust anybody again?" I suppose I don't entirely understand what you meant by that question, but I still fail to see how "well, we set up a prediction market!" is supposed to solve anything in that vicinity.
To put it another way: Let's say we learned that there was fraud on the other side of EA. Suppose it turned out that AMF leadership was, say, distributing ineffective, much cheaper bednets and pocketing the difference in costs. Obviously we'd also feel betrayed. But our reaction wouldn't be, "Well, I checked a prediction market and no one else could see this coming, so I still feel good about my donation."
Instead, we would look to the investigators we actually have and ask how they could have gotten this wrong. We'd be combing over GiveWell's extensive, published work and asked how they missed this. We'd expect GiveWell itself to issue a comprehensive report and identify the bad actors at work. And even then, the science would probably still also be solid and with the right personnel changes, AMF could go back to distributing the actual bednets, limiting the damage.
In fact, it's because of this extensive public ledger of GiveWell research that I have difficulty imagining something analogously extreme and destabilizing occurring at AMF.
Obviously there's a lot more potential for scams and other big problems on the charity-receiving end of things than the charity-giving end, so that's why we have GiveWell but no analogous organization -- I guess you would call it ReceiveWell? -- for potential charity recipients to vet the sources of their funding.
But this whole experience shows that we really should take a long look at whether ReceiveWell would be worth setting up.
I see, so you're using prediction markets more as a form of social proof to say "nobody else saw this, so you shouldn't be upset at yourself either nor should anyone else be upset at you", rather than a way to actually predict people's behavior. I didn't understand that originally.
You might want to join me in asking if Manifold can add a "bet anonymously to X charity" option which I think has all the correct incentives.
Sorry, explain more?
Right now Manifold doesn't allow anonymous bets. This means that if someone has insider info and wants to turn a profit correcting a market, they have to at least attach a pseudonym to it, which is a slight disincentive. I'm not sure how much of a disincentive it actually is considering the ease with which one can make a new Google account, but it's at least an inconvenience.
Got it.
This is another update to my long-running attempt at predicting the outcome of the Russo-Ukrainian war. Previous update is here: https://astralcodexten.substack.com/p/open-thread-248/comment/10104354
17 % on Ukrainian victory (up from 14 % on October 30)
I define Ukrainian victory as either a) Ukrainian government gaining control of the territory it had not controlled before February 24 without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24, or c) return to exact prewar status quo ante.
45 % on compromise solution that both sides might plausibly claim as a victory (unchanged).
38 % on Ukrainian defeat (down from 41 % on October 30).
I define Ukrainian defeat as Russia getting what it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were claimed by Ukraine before the war but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s)*, or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets a mutual defense treaty with Poland and Turkey, that does NOT count as Ukrainian defeat.
Discussion:
Since previous update, we have two major developments, which, however, contradict each other with respect to the future of the war.
First, there is the great, albeit not unexpected, Ukrainian victory at Kherson. I should note that in the comments to the previous update, someone argued that, actually, a Russian retreat east of the Dnieper would be bad for Ukrainians in the long term. I respectfully disagree.
Second, it looks like the Democrats will lose their majority in the US House of Representatives. As of now, a Republican victory is still not certain, but it is highly likely. Annoyingly, 538 froze their forecast just before the elections, but from the tone of their coverage I am guessing that Republicans now have better odds than before the elections, when 538 gave them an 84 % chance to take the House.
I don’t think that Republicans would simply block further aid to Ukraine; the pro-Ukrainian bipartisan majority will imho still be pretty solid. A Republican victory, however, makes further substantial increases in direct US aid to Ukraine, and in indirect aid in the form of sanctions, less likely (btw, I don’t buy the “actually, sanctions help Putin” argument). And if Ukrainians neither moderate their war aims nor are faced with some major disaster on the battlefield or in the economic sphere, I think the long-term political sustainability of current levels of Western aid to them is highly questionable.
*Minsk ceasefire or ceasefires (first agreement did not work, it was amended by second and since then it worked somewhat better) constituted, among other things, de facto recognition by Ukraine that Russia and its proxies will control some territory claimed by Ukraine for some time. In exchange Russia stopped trying to conquer more Ukrainian territory. Until February 24 of 2022, that is.
I want to get a sense of what your assigned probabilities are, conditional on a) continued Western support; b) Russia not going for the nuclear option; c) continued Western support AND nuclear weapons not being used.
Because in my mind, conditional on Western support and nuclear weapons not being used, only a few days into the invasion - when Russian military dysfunction and Ukraine's capacity to fight and win pitched battles had become apparent - I had already made a significant update(*) towards some definition of Ukrainian victory, enough to make that the most likely outcome, since Ukraine could already stop the Russian advance and a prolonged war would only give them an advantage in force quality (just as we have seen). In other words, is the difference in our probabilities about military performance, or externalities?
*) Unfortunately I never wrote down explicit predictions pre-invasion so I don't know how big (my high confidence on invasion happening is well-documented however), trying to reconstruct my view based on what people I consider clear-thinking and who I would likely have listened to had written and said, I probably expected very costly and limited Russian victory and successful Ukrainian guerilla campaign making long-term occupation untenable
So, I am going to start with the easier one, nuclear weapons. My estimate is already based on the assumption that Russia will not use nuclear weapons. The probability of their use was imho always very low, and after the events of the last few weeks it became negligible.
There are some scenarios where nuclear use would make sense for the Russians, like if Ukrainians were given powerful long-range weapons and started to bomb Moscow, or something like that, but those are exceedingly unlikely. Otherwise, I think that if Russia started nuking now, the chances of Ukrainian victory would drastically increase. Ukrainians will not surrender just because someone dropped one or two nukes on them, and international retaliation for such an action would be devastating for Russia.
With regards to Western support, just to be clear, I do not think that Western support to Ukraine will ever be completely cut off (during the war, of course). The key question, however, is how MUCH Ukrainians will get in the future. My dodging non-answer is that if Western support to Ukraine is going to be higher than I expect, the odds of Ukrainian victory would be higher than those that I had given above, duh.
My mental model of the future of Western support, after seeing the results of the US elections, is something like this: there will be slow erosion over time, unless Ukraine starts losing badly, in which case the West would probably do something to prevent them from being completely destroyed. I recognize that this goes against conventional wisdom, which still seems to be that Ukraine needs to "prove" it can win in order to get support; that was perhaps true for, like, the first two weeks of the war, and people have failed to update since then.
If there is no erosion of Western aid (including indirect aid in the form of sanctions) over time, that would mean my model is wrong and odds of Ukrainian victory are higher, but I find it difficult to give precise estimate.
Many Republicans in the House have voted in favor of Ukraine aid, so a slim minority probably doesn't impact things if a bill gets to a floor vote (maybe they can hold things up in committees).
What's even the scenario for a Ukrainian defeat at this point? The West completely abandons them? That's the only way I can see it happening.
As long as the support keeps coming, a defeat seems inconceivable to me, given the atrocious performance of the Russian army.
West's support is widely believed to be conditional on Ukrainian performance. If Putin took Kyiv in a week as he expected there wouldn't have been that much support.
If the economic nosedive continues and gets exacerbated by Russia succeeding in destroying key infrastructure, as it's now attempting to do, it's unclear for how long the army will be able to perform.
I don't think West currently supplies, or even will be able to supply, everything the UAF needs to sustain operations if there were no domestic economy to prop it up. I'm not even sure we're supplying the full range of munitions rather than just what feeds higher end western equipment (while their workhorses are old soviet/russian stuff).
This is the conventional wisdom, but I think it is now almost 180 degrees from the truth. Perhaps for a few days in February 2022, there was doubt whether Ukraine would follow the Afghan path, and thus reluctance to support them for that reason. But now it is clear that the West does not want Ukraine to fall. Whether it wants Russia to lose, that is far less clear.
So I actually think that Western support would be increased if Ukraine suffered further defeats, but the West would be reluctant to support Ukraine in e.g. the reconquest of Crimea, not to mention an invasion of Russian territory.
>West would be reluctant to support Ukraine in e.g. reconquest of Crimea, not to mention invasion of Russian territory
that I agree.
I'd even say Minsk 2 but with EU path and some strong security guarantees short of NATO (including western troops deployment?) right now in exchange for donetsk/luhansk would be pretty good?
I agree with Walter Russell Mead from a recent CWT that real Ukrainian victory is about "membership in the West"/some measure of safety from Moscow. What is doubtful is whether Russia will tolerate anything like that (given its implicit goals of destroying the Ukrainian state/guaranteeing no West there), and if so, how exactly the eventual deal is supposed to look. It's not like Ukraine will buy a Minsk 2 to be followed by another war a few years later..
If you've seen anything great on the likely shape of the settlement pls link :)
The West's support for Ukraine did not waver in the roughly May-August period when Ukraine was slowly but steadily losing ground in the East and stalemated elsewhere. Rather, Western support for Ukraine *increased* in that period, with the delivery of HIMARS, Harpoon, HARM and lots of artillery.
There is presumably some level of low Ukrainian performance below which the West would say "why bother with this lost cause", but I don't think there is any realistic possibility of Ukraine suddenly failing that badly. The best the Russians can hope for at this point is stalemate, and we've seen that the West is willing to support Ukraine through a stalemate.
Including support for their domestic war economy. To the extent that Ukraine is still producing weapons and ammunition for this war, those factories are going to remain open. There's not a whole lot the Russians can do to stop them - just the partial disruption of the Ukrainian power grid has consumed an unsustainable number of Russia's small reserve of modern precision deep strike weapons, and the West can easily ship in generators to keep those factories running.
Nuclear weapons could presumably defeat Ukraine at any point, although it's unclear whether 'victory' for Russia would also occur in this scenario.
Support may not be eternal. First, it's not clear that support will extend to trying to get Crimea back. Secondly, it's not clear that the West has the weapon stocks to keep on supporting Ukraine (munitions for artillery in particular seem to be an issue).
Do you define a Ukrainian defeat as a) and b) and c), or as one of a-b-c? Because it suddenly occurred to me that most of the difference between your estimates and mine lies in what would count as a Ukrainian defeat (or victory). For example, a) + not b) + not c) would count as a victory for Ukraine to me, if not c) means joining either NATO or the EU.
Good question, it is a bit of both, I guess.
To be more precise, imho a Ukrainian defeat would be either a) Ukraine stopping its claim to at least some of the territories that were claimed by Ukraine before the war but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) getting more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s), or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO.
So it is "or", BUT I would count fulfilling one of these conditions as a Ukrainian defeat only if Ukraine did not get anything important in return (yes, the word "important" introduces some wiggle room). If e.g. Ukraine agrees to renounce its claim to Crimea but gets NATO membership, I would count that as a compromise.
I try to base my resolution criteria on what Ukrainians themselves say they would count as a victory, not on some "objective" standard. And they absolutely say that they very much want Crimea back, and I see no reason to doubt their sincerity; I know that not only from political statements, but also from talking to common Ukrainian people.
With that in mind I think I agree with your assessment - getting back Crimea in particular seems really unlikely to me...
It seems harsh to call the war a loss if Ukraine makes gains compared to the status quo ante bellum. Giving the Russians a thorough spanking and reclaiming the Donbass but not Crimea is surely not a *loss*?
Getting back to this, Zelensky helpfully released a current list of Ukrainian war aims (here, in English: https://www.president.gov.ua/en/news/ukrayina-zavzhdi-bula-liderom-mirotvorchih-zusil-yaksho-rosi-79141). It should be noted that only points 4 to 7 and 9 are really war aims; the rest is more like intermediate steps.
Here it is in the form of numbered list: https://news.yahoo.com/president-zelenskyy-10-point-peace-094800133.html.
There is a difference between "not reclaiming Crimea" and "renouncing the claim to Crimea for the future". If there were a ceasefire under which Ukraine got back all its Donbass territories and Crimea remained disputed under de facto Russian control, that would of course be a great Ukrainian victory.
But if there were a peace treaty under which Ukraine got back de facto control over the Donbass in exchange for formally transferring Crimea to Russia, is that a Ukrainian victory? My impression is that Ukrainians would not think so. Zelensky would not be lauded as a war-winning hero if he agreed to that. Ukrainian politics and policy since 2014 have been dominated by the goal of getting all occupied territories back, which is of course a completely understandable and legitimate goal, given that the Russian occupation is blatantly illegal.
But imho you misunderstand the dynamics of the conflict if you think that the shooting would necessarily stop if Putin proposed that Russia was ready to renounce any other claims on Ukraine in exchange for international recognition of Russian sovereignty over Crimea.
I agree with his probability estimates, not with the words he uses to describe them.
But he's not estimating the probabilities of the outcomes most of us care about. His "Russian victory" and "compromise" categories both include outcomes that I would consider substantial Ukrainian victories along JohanL's lines. They also include actual stalemates and Russian victories, of course, but with no way to distinguish between them, who cares?
It's as if, the day after Pearl Harbor, someone published a probability assessment of the Pacific War that read "5% Japan invades and conquers the United States, 95% Japan is defeated, where Japan counts as defeated if they fail to conquer the United States". True as far as it goes, but that 95% includes everything from Hawaii demilitarized and the East Asian Co-Prosperity Sphere solidly recognized as Japan's unchallengeable domain, to the Japanese language being spoken only in Hell.
Most of us would care very much about those extremes and the range of possibilities in between, which means we don't care about the estimate that lumps them all together.
Well, as a Catholic who had to learn about (yet another) sexual abuse scandal last week and has good reason to think that the local hierarchy is hopelessly corrupt let me tell you I share what you feel...
Everyone involved in or around crypto is fraudulent. If you accept this as a fact, then it becomes a lot easier to not feel betrayed when they are proven to be frauds.
Indeed. It feels a little like being gaslit to read suggestions that no one saw this coming. The warning signs are there for the whole sector, regardless if people were raising the alarm about FTX in particular (and some people were doing this).
I myself have taken a week's pay from a crypto firm for consulting work. My impression was that the people involved weren't fraudsters. They were genuine, earnest, naive, seemed to have pretty high IQs, and were also unfathomably stupid. I didn't return the money but I ran at the first opportunity and feel some lasting discomfort that I allowed myself to become involved even if briefly with a sector I am profoundly sceptical about.
Effectively Unethical: support for crypto-based organizations in any form which have no audit transparency or which do not conform to generally accepted accounting principles
Even if *possibly* not 100% true, it's absolutely the rule of thumb you should use.
My feeling too, at an admitted distance.
I would trust Vitalik with my life <3
Wow... In many ways I have no idea what you are kvetching about. Crypto has collapsed and some people are suffering? There are so many suffering for other reasons around me, why should I care about crypto millionaires? (I find a lot of EA stuff kinda dumb, give locally.)
Likely most FTX users were not millionaires. A lot of people used it as a bank (crypto generally positions itself as an alternative to traditional banking, and FTX for example offered a Visa debit card and ~8% interest on deposits).
I had a couple paychecks' worth in there. Fortunately I read the last open thread where I found out about the trouble it was having and was able to pull my stuff out before it fully fell apart.
What currency was your deposit in?
My fiat currency was USD, but the majority of my balance was Dogecoin I'd bought for fun back in 2014.
(Admittedly while I initiated the USD withdrawal, I doubt it will complete. 90% of the crypto I was able to move immediately though; the exception was TRX for which I was not able to get a wallet address in time.)
Were you getting 8% on the USD or the Dogecoin or both? Presumably the Visa debit card would draw on the USD?
On both fiat and crypto. There was a threshold above which one's deposit would receive a lower return, but I wasn't near that level.
As for the card, I didn't have one myself but they advertised that it exchanged crypto for cash at point of sale. (This appears to work similarly to the cards advertised by other exchanges like Coinbase and Binance.)
So with the debit card, you presumably wouldn't know (without doing your own calculation) the crypto cost of the purchase at the time of the transaction? I suppose that's no different from buying an item in $ using a £-denominated account, except that intra-day movements in USD:GBP tend to be modest.
USD inflation is currently 7.3%, so a nominal interest rate of 8% is only 0.7% real, which I can understand might be seen as modest, but it's still well above commercial rates, so how did you understand the money was being generated? Is it that FTX was (or was understood to be) lending the deposits out at some higher rate of interest, like a bank? If so, who are the borrowers?
Or is the threshold you mention at quite a low level? Barclays currently offers a savings account which pays 5.12% on balances up to £5k (for their current account holders only). Clearly they lose money on this, because they're paying 2.12% above base rate, but that would equate to at most £106 pa. I highly doubt they will maintain that offer long term, so it's a modest cost the bank pays to acquire customers, just as other banks offer a £200 sign-up bonus.
Sorry for asking all these questions, but I don't really have much sense of how FTX was intended to function.
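(For what it's worth, the "8% nominal minus 7.3% inflation = 0.7% real" figure in the exchange above is the usual subtraction approximation; the exact real rate comes from the Fisher equation. A quick sketch of the difference — the function name is mine, not anything from FTX or the thread:)

```python
def real_rate(nominal: float, inflation: float) -> float:
    """Exact Fisher equation: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

# 8% nominal interest with 7.3% inflation:
approx = 0.08 - 0.073            # simple subtraction: 0.70%
exact = real_rate(0.08, 0.073)   # slightly lower: about 0.65%
print(f"approx {approx:.2%} vs exact {exact:.2%}")
```

At low rates the two agree to within a few basis points, which is why the subtraction shorthand is harmless here; it diverges more at high-inflation levels.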
Hmm OK, 8% interest would raise questions with me... I'm getting ~0%.
Having just reread "The Moon is a Harsh Mistress": tanstaafl.
It's in the ballpark of what I've had from high-yield savings accounts with traditional institutions in the past (which admittedly slumped for a while but are apparently ticking back up to ~3.5%–4% these days).
It was also somewhat on the moderate-to-restrained side for what crypto enterprises often promise—Anchor was returning more than double that around the time I signed up with FTX, but that _was_ obviously unsustainable and fell apart a little while later.
Yeah, but you’d also be banking the increase in value of your crypto as an unrealized gain, right? What was the cost basis upon which they paid you 8% interest?
My understanding was 8% in terms of each currency or coin itself, irrespective of its relation to the dollar, and the tally of interest paid out to me in each coin was in the ballpark of that. But I don't have the exact numbers or the fine print handy to confirm.
Ok, I get it. If it is at all like you describe it, that is a pretty swift piece of arbitrage on the part of that exchange, if it runs in their favor. Which apparently it doesn't anymore. I'm sorry for your loss. I hope it doesn't put too big a dent in your peace of mind.

My wife and I just got completely screwed by a common- or garden-variety building contractor. You don't have to be a crypto genius to screw somebody. Pertinent to this situation and a lot of the preceding thread (re: can people be wrong about their own experiences), I am deeply wondering whether he really meant to screw me or whether he was just deluding himself in some way that I got hurt by. I am experiencing all the self-recrimination that others here are experiencing about whether or not I should've seen it coming. I reckon anything I take on faith is my responsibility; I could've done a lot more due diligence before I hired the guy, but for all kinds of reasons I didn't. I'm still sifting through that bag of old clothes. You're not alone.
I also question how effective a lot of SBF's donations have been. Tens of millions of dollars to political campaigning in America sounds about as effective as any average charity.
On a different note, I write a newsletter where I share three interesting things once a week: https://interessant3.substack.com
>"The end does not justify the means" is just consequentialist reasoning at one meta-level up. If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn't think this way.
That's a very roundabout way of saying "consequentialism is self-defeating, so become a deontologist".
A lot of people accepting this would merely become rules-utilitarians instead.
The problem with deontology is that it's pretty much silent when it comes to what kinds of good acts to do. Give five dollars to a beggar, or give it to Givewell?
Is the beggar in front of you? Give them the five dollars.
Are you at home or elsewhere? Donate to Givewell if you think they will make a better use of it.
Isn't that just newtonian ethics, as per https://slatestarcodex.com/2013/05/17/newtonian-ethics/ ?
I never said *I* thought you should become a deontologist, but I also don't see the problem with answering "either is permissible".
any preferred explainers to recommend?
That’s a not very roundabout way of being binary.
And...?
People are not binary. Outside philosophy departments there are no deontologists or consequentialists.
Eliezer has said he thinks people should be 75% of the way between deontology and consequentialism. I might quibble with the actual numbers but I think that's a good way to think about it.
Realistically, I think, the real issue here is weaponization. As soon as a rule exists for how you are supposed to give to charity, we can use that (just like we use everything else) to demonize any person or group we don't like...
The only out is to condemn weaponization per se. I have my theory of how I want to give, you have your theory, and let's both try to be decent people according to our theories...
Explaining why you think your theory is optimal? OK.
Explaining why my theory is not optimal? uhhh...
Insisting that there's only one honorable theory? Yikes, that crosses red lines.
What do you mean by weaponization?
Weaponization is when the primary use of a moral principle becomes to attack *others* rather than as a guide to oneself.
It's basically the fate of every "bright-line" moral principle; the first generation (self-selected, by definition somewhat unusual people) pick up the principle as a way to live better; a later generation (mostly "normal" people, with the normal social instinct of caring mainly about an us and a them, and hurting the them) switch to using the moral principle not as a guide to their lives but as a way to attack others.
We are living through this right now with "Woke". But it's happened many times before. Look at the Reformation. Look at the Cultural Revolution. Look at Stalin's regime. Look at Donatism.
It's going to be super hard to come up with a theoretical foundation for this, though. Most ethical theories start with a theory of value, and trying for a blend here won't be easy.
was just discussing in another context: mono-paradigm thinking is in general a very bad idea. please don't. use a mixture. each part has some good arguments for it so it's fine. meta-paradigm is that no argument can be complete and so no mixture weight can really be zero.
I think meta-ethically this is just rule utilitarianism, although I agree that doesn't help too much in figuring out exactly which rules to follow.
Sure, but rules-utilitarianism is pretty different from deontology. Rules-utilitarianism still accepts the value theory of utilitarianism, and merely proposes that attempting the ethical calculus won't work. Deontologists will still be appalled.
My opinion is that two-level utilitarianism is probably the most sensible - we accept established standards and rules of thumb in most cases (aware both of the difficulties and the risks of bias in doing the actual calculus), and only break out the attempts at hedonic calculus when we feel that they're insufficient in some way or when we're led to question the rule of thumb.
The plot of A Christmas Carol has ascended to become a pretty standard stock plot (https://tvtropes.org/pmwiki/pmwiki.php/Main/YetAnotherChristmasCarol). What other plots created in the modern era have done this? TVTropes suggests It's A Wonderful Life, but I think that's a bit different - creators usually just copy the device of a character being shown what the world would be like without them, rather than the entire plot.
Seven Samurai
Besides A Christmas Carol, Gift of the Magi is another obvious stock plot (though nothing is anywhere near as ubiquitous as ACC). I'm surprised no one has mentioned it yet.
"Modern"
It's from 1905.
Seinfeld is in there. Pretty much everything in the Seinfeld category.
https://tvtropes.org/pmwiki/pmwiki.php/Main/SeinfeldIsUnfunny
Moby Dick comes to mind, as a big influence on things like Jaws and Master and Commander; the Far Side of the World.
Maybe Crime and Punishment? Feels like movies following it are usually dark comedies, but Columbo was heavily inspired by it.
I am not really sure this is an example of borrowing a plot to create a new work. It looks much more like taking a plot that has a particular setting in the original and reimagining the setting. I mean, would you say setting Shakespeare's play Julius Caesar among the fascist militias of the '30s is a new version of the plot of Julius Caesar? In the purely schematic reading, there aren't really that many plots; it's in the details that they acquire their individuality.
I can’t remember who it was, but someone did propose that there were really only two plots; someone leaves home or a stranger comes to town.
Borges suggested there were really four plots: the siege of a city (Iliad), the return home (Odyssey), the quest (Argonauts), and the self-sacrifice of a god (Attis, Odin, JC superstar)
I'd like to see an analysis of Law and Order plots and how many are the exact same plot with different minor details. They probably have a dozen templates or so that just get rotated through.
Now I'm stuck trying to imagine what a Law and Order Christmas Carol episode would look like.
Die Hard. TVTropes calls it "Die Hard on an X": https://tvtropes.org/pmwiki/pmwiki.php/Main/DieHardOnAnX
Air Force One? Die Hard in a Plane. Under Siege? Die Hard on a Navy ship. There's an apocryphal story that this eventually came full circle, when someone who hadn't seen the original pitched his idea as "Die Hard, but in an office building."
This is a fascinating thread. I assume everyone here is right, I certainly haven't consumed a sufficient fraction of world media to say otherwise, but I've never seen a plot imitation of either A Christmas Carol, Cyrano de Bergerac, Groundhog Day, or Rashomon. Numerous film versions of the first two, for sure, and tons of Shakespeare burglaries, of course. Same with the Connecticut Yankee plot, that one's ubiquitous in SF and fantasy.
There's a modern-setting version of Cyrano called Roxanne.
I know about Roxanne, but to be frank I see it as the same category as "look how creative I am, it's The Tempest but I put people in suits" and Baz Luhrmann's Romeo and Juliet: it's not an imitation, it's just the original recostumed.
I love Cyrano de Bergerac, and I love Steve Martin, but I really hated that movie.
Roxanne is a very fun adaptation of Cyrano. And I've seen the "smart ugly guy feeding lines to the good looking dumb guy who's wooing the woman they both love" scene played out a lot of times in different places.
The plot of Groundhog Day is essentially a person is doomed to repeat the same mistakes/ go through the same motions/ endure the same torments over and over again, until some realization or a change of heart or intervention allows them to escape.
I could argue that it is exactly the same plot as the myth of Prometheus or Sisyphus, except he never gets the payout
Groundhog Day is interesting in that there's two strains to it. There are stories like Groundhog Day or Palm Springs where the protagonist is trapped in a timeloop and the goal is to stop looping, but then there are stories like Edge of Tomorrow or Source Code or Re:Zero, where the timeloop is a tool the protagonist is using to solve some other more action-y problem, using their foreknowledge and infinite retries to explore all the angles until they find a way to win.
I suspect that the latter type owes as much to video games with the ability to save and reload as it does to Groundhog Day itself.
Rashomon was largely based on a Japanese short story called "In a Grove", published in 1921. One of the influences for that story seems to be a long poem by Robert Browning, called The Ring and the Book, which he based on the written account of a real crime that took place in Italy in the 1600s.
Groundhog Day, which was actually done much earlier in The Defence of Duffer's Drift.
I’m not sure it’s there yet but Mamma Mia might get there eventually
Freaky Friday.
The original was 1976. I am unaware of an earlier body swap movie, but there may have been one.
OMG, the internet is bountiful:
https://www.gq.com/gallery/best-body-swap-movies-films-list-history
The first one I can think of is actually a book: Vice Versa, or A Lesson to Fathers, by F. Anstey, published 1882.
I agree that Vice Versa is the best answer, but merely for the sake of trivia, I want to point out a humorous poem, "Grandpapalittleboy," from D'Arcy Wentworth Thompson's 1864 volume Nursery Nonsense, or, Rhymes without Reason. Much like Vice Versa, Freaky Friday etc., the poem revolves around the absurdity caused by an adult and child switching bodies, and begins:
Last night, when I was in my bed,
Such fun it seem’d to me;
I dreamt that I was Grandpapa,
And Grandpapa was me.
But the poem is so slight, that I second Vice Versa as the urtext, and only mention Thompson as a colorful aside.
Rashomon.
The Hangover (person wakes up in highly unusual / plot hooky circumstances with no memory of the immediately preceding events, must piece together what happened while also facing some external deadline)
The Bourne Identity works as a non-comedic version of the Hangover/Amnesia thing
I think Dude Where's My Car predates that. Perhaps other similar stories predate that.
It reminds me of a W.C. Fields joke from The Bank Dick.
Fields: Bartender, did I spend $20 here last night?
Bartender: Yes, you did!
Fields: (Wipes brow with handkerchief). Whew! I was afraid I'd lost it.
In case anyone's interested, inflation calculator says that would be $415 today.
I think Twain invented the concept of "time traveller uses modern science to defeat enemies and change history" in A Connecticut Yankee in King Arthur's Court. It pops up all over the place now.
A Christmas Carol was 40 years earlier. Although Scrooge didn't interact with the past, he time-travelled to the past, and to two futures. Neither future happened, so it was effectively a parallel universe story.
Cyrano de Bergerac
Hero's Journey.
(Yes, it's created in the modern era...)
Are you arguing that Campbell was wrong and invented the plot himself?
Campbell was *absolutely* wrong. Classic example of filing off inconvenient parts of your data to fit the thesis. There's also a bunch of astrology-style vague statements that are easy to fit to lots of different cases, such that you end up with a theory that seems insightful but actually means nothing.
But the resulting plot he came up with is pretty great, no? Or would you argue that Star Wars isn't really based on it?
I know Lucas talked a big game about it, but I wonder whether it's really more justifiable to say "Star Wars is based on Campbell's monomyth" than "Star Wars is based on The Hidden Fortress" or "Star Wars is only good because Lucas' wife at the time and a number of other prereaders heavily revised his script", or even "actually Star Wars is not good except in the sense of being visually spectacular, even Harrison Ford hated the dialogue".
I'm honestly a lot less invested in/knowledgeable about Star Wars than one is meant to be as a dork, so I don't know at all in what proportions which of these claims are really justifiable.
Yes.
Interesting.
It’s hardly an original idea on my part.
https://tvtropes.org/pmwiki/pmwiki.php/Main/WholePlotReference may be what you're looking for. (Though for some reason it doesn't seem to list _Fantastic Voyage_.)
How does one protect oneself against vice and the inherent corruption of power, assuming one wants to be a force for good? Is the model that the FTX folk started noble and then went off, or that it was always, or for a long time, a grift, beginning perhaps with the very concept of earn-to-give?
A solution I've seen in the technology space is "give people the ability to walk away from you if they feel you've become tyrannical".
Vitalik Buterin (creator of Ethereum, the 2nd largest cryptocurrency) gave away the overwhelming majority of his wealth during the 2017 cryptocurrency spike to a combination of GiveWell and the Ethereum Foundation. This bought him a lot of goodwill within the Ethereum development community, but critically made his continued influence within the community contingent on maintaining that goodwill.
Vitalik's suggestion to SBF (and others, obliquely) was basically "you should have given away your money as soon as you got rich". https://twitter.com/VitalikButerin/status/1590473524220952577
Similarly, a number of open source projects are led and maintained by a (I swear this is a real designation) benevolent dictator for life. https://en.wikipedia.org/wiki/Benevolent_dictator_for_life
Crucially, in all the open source cases, there's *nothing* keeping the entire community from forking the whole project and shutting out the benevolent dictator, save a group consensus that the dictator is doing a good job of leading the project.
Put yourself underneath a good governance structure that you don’t have the power to alter or abolish.
A few people of the inner circle, barely in their 30s, on drugs daily, living in the Bahamas in some shared apartments entangled in a web of polycules, with access to billions of $.. should have never raised any concerns, right ;)
So, I have a bit of an inferiority complex generally, and maybe a chip on my shoulder, where I assume people in these companies have to be brilliant (my hope is that the two counteract and I come across as normal), but… I work in a normie mainstream bank that does stuff in dollars. I have been audited harder over whether or not we inappropriately charged someone $0.11 than these guys apparently were over billions of dollars in capital. I have pretty broad authority to do some stuff, but definitely not everything I can imagine, and I also know I will eventually have to explain myself to somebody else; there's always somebody looking over my shoulder. Like, it's painful to have to share report coding with some auditor who has no context or idea of what you're doing or how it really works, but you have to do it so that it turns up stuff like this. Just the idea that the founder is the only person able to alter the source code fails a lot of basic maker/checker stuff, and that would have come up on a really low-fidelity deployment review. I would have quit before I passed an audit that didn't have that as a finding.
"I have been audited harder over whether or not we inappropriately charged someone $0.11 than these guys apparently were over billions of dollars in capital."
Ditto here in my time as minor government bureaucratic minion, where the auditors came annually to look over the books and we *did* have to have an explanation for "the quote was for €30.00, the invoice was for €30.50, why did you pay that extra €0.50?". I did an accountancy course where a lot of the class were also public/civil servants, the guy teaching the course came from private industry, and when one of the class was out by about €5,000 on a sample problem he said "don't worry, this doesn't matter in real life" and we all laughed and explained that in our day jobs we had to account down to the last cent, because dealing with the public money is not like private companies.
Part of the problem was that the people involved were all well-connected, so it was no problem to get Dad to ring his old buddy in the SEC and get him to let you decide what should go into the regulations.
I also wonder if there might have been a status anxiety thing at play. No one making a “mere” six figures wanted to challenge the billionaire wunderkind. I just can’t see how this wouldn’t turn up otherwise.
I’m relatively new to this blog and the broader EA ecosystem, so I think I’m missing some background knowledge/philosophy about prediction markets.
The idea of crowdsourcing whether or not to trust a public figure strikes me as extremely unreliable, like you’re explicitly opting into trusting mob rule and potential mania over your own judgment. Every day on Twitter, someone’s reputation gets temporarily ruined over nothing, and it seems like the entire point of trusting someone is that you wouldn’t change your opinion on a dime just because some anonymous people are freaking out.
If the idea is that putting money on the line guards against this kind of mania, I guess I’m skeptical. We see irrational exuberance in more liquid markets all the time. And in the case of someone like Vitalik Buterin, it seems like you could already bet against his trustworthiness by shorting Ethereum; if you don’t think current market dynamics reflect his trustworthiness, why would you expect this much less liquid market to give you more insight?
I’m sure this isn’t some brand new critique of prediction markets, so is there a good write-up somewhere that explains whatever I’m missing?
Current market dynamics reflect our civilization's collective best guess about Buterin's trustworthiness, and likewise they did regarding SBF's. Yes, it turned out that it was wrong, but people who knew better could've made money on their knowledge. Given that they didn't, they don't get to say that they knew better. The whole idea is not so much about getting perfect insight, it's for people to put their money where their mouth is, so that in the future we can easily judge their track record.
If I knew someone "trusted" me because I was performing well in a prediction market, I would not feel at all trusted by that person.
I am halfway done with a prediction market FAQ, so I am going to avoid answering this in the hopes that the FAQ answers it better later. Other people can preempt me if they want.
It is curious to me that everyone is trying to blame SBF's philosophy for what he did.
The closest examples to individuals who did what SBF did were the rogue traders (Nick Leeson of Barings Bank, Jerome Kerviel of Societe Generale, and Bruno Iksil of JP Morgan being the three biggest), yet I don't think anyone tried to say that Irish football was linked to Nick's reasoning in doing what he did.
So it's interesting how much EA is being mentioned in the conversations of SBF's motivations to do what he did.
"So it's interesting how much EA is being mentioned in the conversations of SBF's motivations to do what he did."
I don't know if this is rubbing salt in the wounds, but a reminder of Scott's endorsement of Carrick Flynn for Congress:
https://astralcodexten.substack.com/p/open-thread-217
"The effective altruists I know are really excited about Carrick Flynn for Congress (he’s running as a Democrat in Oregon). Carrick has fought poverty in Africa, worked on biosecurity and pandemic prevention since 2015, and is a world expert on the intersection of AI safety and public policy (see eg this paper he co-wrote with Nick Bostrom). He also supports normal Democratic priorities like the environment, abortion rights, and universal health care (see here for longer list). See also this endorsement from biosecurity grantmaker Andrew SB.
Although he’s getting support from some big funders, campaign finance privileges small-to-medium-sized donations from ordinary people. If you want to support him, you can see a list of possible options here - including donations. You can donate max $2900 for the primary, plus another $2900 for the general that will be refunded if he doesn’t make it. If you do donate, it would be extra helpful if the money came in before a key reporting deadline March 31."
Who else was throwing Big Money at the campaign?
https://www.cbsnews.com/news/crypto-billionaire-sam-bankman-fried-funds-house-candidate-carrick-flynn/
"In the crowded primary for the newly created 6th Congressional District in Oregon, first-time candidate Carrick Flynn has attracted over three times as much outside spending as any other House candidate this year.
The lion's share, over $10 million, came from the super PAC Protect Our Future, established by 30-year-old crypto billionaire Sam Bankman-Fried, the founder of cryptocurrency trading platform FTX.
Flynn and Bankman-Fried are both members of a philosophical movement known as effective altruism — as part of a network of researchers and philanthropists, they're dedicated to working on the truly big threats to the future of the human race: pandemics, climate change and nuclear weapons, for instance.
...If Flynn does pull out a primary win, he could be the first effective altruist in Congress, and effective altruists are playing a very long game. Their "30,000-foot perspective" takes an exponentially lengthier view. One essay associated with the movement calculates that if the human species lasts as long as the average mammalian species, around 100 trillion more of us could live and die over the next 800,000 years. Some members are debating what the "foundations for space governance" should entail — not exactly topping any issues polls with voters anywhere.
Bankman-Fried was practically born into this realm of thought; his parents are both Stanford law professors with an interest in utilitarianism. He went to MIT, and at the beginning of his career, he worked at the Centre for Effective Altruism. By the time Bankman-Fried celebrated his 30th birthday, he had built a crypto exchange with a multi-billion market capitalization.
...Bankman-Fried has argued that more effective altruists should steer themselves toward making a positive impact on U.S. policy and he was one of the top two contributors to Biden's campaign in 2020, just behind fellow billionaire Michael Bloomberg. This April, when he appeared on a podcast that's big among effective altruism devotees, he was bullish about the power of outside spending in primaries: "The amounts spent in primaries are small. If you have an opinion there, you can have impact."
So the associations between EA and Bankman-Fried were being made from the start, this is not simply "Amongst the other things he threw money at, like sponsoring sports, he made donations to this charity".
Nick Leeson became a commercial director of Galway United ten years after Barings. So there’s clearly no relationship there between his illegal acts and his subsequent career.
I think most of it's that non-EA people think of EA as being very high-and-mighty and presumptuous, since the core of the philosophy is saying that actually you *can* directly compare charities with one another; and after deciding which ones are best, you donate to those.
This is implicitly very unflattering to (as a first approximation) everyone who's interested in any other charity or cause area.
Nah, they think that EA is presumptuous not because the notion that one charity might be objectively better than another is inconceivable to them, but because they dismiss the idea that some outsider nerds could possibly be authorities on this.
Yes! This.
funny you don't apply the same argument to, say, college (or employer?) rankings
Oh, I'm not defending that perspective. I actually think EA is great as applying basic cost/benefit analysis and stack ranking to charitable causes.
But I think that perception is why it takes flak.
To my knowledge, none of the people you mentioned crafted an extensive public persona as the Philanthropy Billionaire who earned money to give it away, ran massive pseudo-charitable foundations, etc.
Exactly. Yet they did something similar to what SBF did. And we don't know the extent of the damage, but FTX's losses could potentially be less than the losses they incurred, adjusted for inflation.
So why do we blame SBF's philanthropy persona and not Leeson's Irish football persona?
Because Nick Leeson did not do what he did in order to get a job with Galway United, nor did he have public interviews talking about his love of League of Ireland football and how this influenced him into wanting to make tons of money to support it, nor did Leeson get flattering puff pieces about how he was a millionaire philanthropist.
Because SBF's philanthropy persona is all about acquiring large sums of money and redirecting them to where SBF subjectively thinks they do the most good, apparently even if that means stealing billions of dollars. I don't think that soccer is remotely comparable.
Also from what I can tell he didn't become a soccer CEO until after he was out of jail for the fraud.
I agree that EA should not be blamed, but it’s not crazy to say that his overall worldview, and potentially that of most of defi, is to blame. The rogue traders that you list did what they did despite and in flagrant avoidance of dozens of external regulatory and internal risk controls, which is clearly not the case here. The philosophy of the institutions about risk and compliance in the two scenarios is vastly different.
There is a lot of Chesterton’s fence demolition that is central to defi, and a reevaluation if not outright rejection of a lot of what have become first principles in traditional finance. So when the question is asked “should we steal from client accounts to support our white knight rescues of the future of finance?” it’s a lot easier to end up in the cost-benefit failure mode that Scott identifies rather than the clearly correct deontological answer.
I want to write an article about this, but I think the thesis will be something like:
There is nothing particularly valuable about crypto except as an unregulated sub-economy for people worried about badly-intentioned-authoritarian or well-intentioned-regulatory interference in the regular economy. If you agree that this escape valve is a good thing to have, you should support crypto (and want it unregulated, and accept that any specific project operating on it is plausibly a scam). If you disagree that this escape valve is a good thing to have, you should ban crypto. There's just no reason to want crypto to exist but also regulate it, and I don't understand why everyone thinks this is the reasonable compromise solution.
The premise ("there is nothing particularly valuable about crypto except as an unregulated sub-economy") does not seem to me to be widely accepted.
Nuanced premises rarely are. The widespread attitudes toward crypto are "it's all scams and terrorists and drug dealers" and "don't give in to FUD, crypto is still the clear path towards a glorious utopian future".
There’s a good argument for advancing crypto technology to be such a backup plan. But an unregulated financial infrastructure, even if intended only for good uses, is likely to be overwhelmed by actors who just want to take advantage of the unregulated nature of the system. That in turn would bring about calls for regulation. Maybe the best chance for an equilibrium state is one that is regulated for the most part, but projects that carry the original benefits survive on the fringes as long as they don’t get too big and attract much attention.
It is worth noting that the valve has existed for decades (in the form of suitcase finance), but regulation around AML/TF has closed that valve considerably, right around the time crypto became more mainstream.
I think "have less regulation" is going to very naturally be linked to "have more scams", unless regulation is literally useless.
I agree that we should be trying to change that, but I think that looks like better/more interesting crypto technology to make things that are both hard-to-regulate and hard-to-scam, and that "support better/more interesting crypto technology" looks a lot like "support crypto", plus half of the new technologies will turn out to be scams.
The particular example I'm thinking of is better/more convenient noncustodial wallets (so that your money isn't on an exchange which can collapse or defraud you), but I am stealing that from Vitalik and I don't actually understand a lot of the considerations here.
I think crypto has proven itself a bad method for avoiding regulation. If nothing else, you're not really getting away from regulation. Crypto is regulated; it is just regulated by its code rather than by the law. In essence, you're replacing the risk of the government changing the law on you with the risk of trap clauses in the codebase. Now, if you live in a country where the government regularly messes up the economy, maybe that hedge is reasonable, but it would have to be an extreme threat to match the risks of crypto.
You might be right about external regulatory controls, but I don't really agree with internal risk controls. There is no way SBF could have done what he did without serious skirting of his own internal risk controls. It would have leaked way sooner.
And FTX was never DeFi. I have never considered it DeFi, and neither have most people in crypto. FTX was a brokerage that allowed people to trade crypto more efficiently and using complex financial instruments. It had nothing to do with DeFi.
Early reports may turn out to be wrong, but some of the unverified rumors that have come out include that SBF was granted direct uncontrolled access to official books and records, and the ending “hack” may have been facilitated by internal technology staff. When that kind of thing happens, it’s not a skirting of internal risk controls, it’s an intentional decision not to have effective risk controls.
Yeah, lots of billionaires donate to charitable causes or arts to whitewash themselves. But nobody says that liking ballet and opera makes you evil
I'd say that preferring "classy" entertainment is a *little* sus. I'm sure there are people who actually just like it, but there are also people who think that they *should* like it because that makes them Better than others.
Why is wanting to status signal sophistication suspect?
It is something most people do
Sincerest condolences, Scott. For whatever it is worth from an internet stranger, having the part of you that’s vulnerable enough to open up to other people get stepped on is just about the worst. But you were never wrong to want to see the good in people, nor for working with them to do good.
I’m not much of a joiner, but this doesn’t make me think less of Effective Altruism, for whatever that’s worth, as I know it’s near and dear to you. Once a group of people gets to a certain size, you can’t just filter out these kinds of random human behaviors effectively. It was bound to happen eventually.
From what I understand, SBF had complete control over the code base for FTX. I’m kind of amazed that didn’t come up in a simple audit of their governance but then from what I understand (I listen to the All-In Podcast, and read some articles on it) it sounds like a lot of the VC’s didn’t even conduct that level of diligence. It’s like he had a magical wand to touch all the money in all the FTX accounts any time he wanted to make all his other problems go away. That would be a weird question for someone who doesn’t work in finance to ask and certainly you can’t be at fault for not having asked it.
I don't think it's completely unfair to point to St. Petersburg - SBF was pretty open about it. I actually wrote a little bit about his views on it at https://astralcodexten.substack.com/p/if-youre-so-smart-why-arent-you-governor
There is a really interesting question of why billionaires keep doing business after they've made much more money than they could ever need. I think one answer is that most normal people probably don't, and retire after getting moderately rich, and the people who actually become billionaires are all weird in one way or another. I assume most of them are very competitive, or psychologically flawed, or just want to see what will happen. But wanting to donate the money could be another excuse.
I suspect that billionaires start new businesses post-wealth largely because they're interested in the projects those businesses represent, rather than just because they're hoping to make more money.
Examples where I think profit was secondary to why the company was founded:
* Blue Origin
* SpaceX
* Neuralink
* Mark Cuban's Cost-Plus Drugs
* (arguably) Mark Zuckerberg and his pet VR projects. I mean, it's *basically* a completely new business using stockholder money, mostly divorced from the core social media empire, which Mark can do because he's a majority owner of the company.
* OpenAI
A common story seems to be a billionaire saying "X technology seems like it could be badass if it was developed a bit further, but nobody's working on it in a way I approve of. So I guess I may as well use some of my absurd wealth to do it myself."
I bet that only if there's no hope of the enterprise turning a profit does the billionaire make a charitable foundation for it (since at least that way you get the tax benefit, even though you have less freedom of action than with a private company).
"Mark Zuckerberg and his pet VR projects. I mean, it's *basically* a completely new business using stockholder money, mostly divorced from the core social media empire, which Mark can do because he's a majority owner of the company."
No he's not. Zuck owns about 17% of the company. The reason he can do all this shit while the other main owners can only pull their hair out and wail is that they stupidly gave him a golden-ticket arrangement for being a genius founder, where he has voting control despite owning a palpable minority of the stock.
Yeah, that.
One of my favorite things about this place is that I get to hear about all kinds of things I didn’t already know, and that everyone here thinks a lot differently than I do.
So, on St. Petersburg, which I read as a sort of “the road to hell is paved with good intentions” but with math…
Whenever I find myself getting sucked into a paradoxical vortex like St. Petersburg, I remind myself that the universe is something like “all the principles and paradoxes that there are, competing with and running into each other in ways that are probably not entirely calculable” and also that I am not a super computer. So whenever I stumble across something that seems Deeply Troubling I look up and remind myself that the universe seems to be working okay and just because I found an interesting sort of answer to a weird problem doesn’t mean I knew the right question or context. That might be saying “don’t think too much” or “don’t be so sharp you cut yourself” with extra steps but I think it is the eject button all of us have to press at some point.
It’s reasonable that SBF probably got a lot of signal that the way he was thinking about things and focusing on those small little things would lead to success, and he kept pulling the thread until it led him off a cliff. Until you’ve really truly internalized the understanding that you personally can be an asshole, and not just like accidentally but on purpose, it’s a hard thing to avoid. My guess is he probably talked himself into all kinds of pretzels about how it would be okay.
On the other part for why billionaires keep doing it:
I think we’re all just machines that solve problems. Take away the problems and none of us know what to do with ourselves, or at least that happens after a while. That doesn’t have to be grim, by the way. Lots of those problems are things like “what’s the most fun I could have surfing today” but other people need to be productive outside of themselves. Like my roof started leaking from some storm damage the other day and I almost got excited because that means I get to go up and fix it and I’m kind of nostalgic for when I used to work construction. I like to fix things with my hands.
I’m middle class (probably poor by the standards of some people here, but I’m incredibly wealthy by the standards of my childhood so it’s about all the wealth I can emotionally grapple with) and everything about being middle class is perfect to me. It just fits like a good pair of shoes. I have a job that takes enough of my attention I don’t feel like I’m doing nothing and easy enough that I still have lots of time left for my family. It gives me a structure to set a good role model, but also I get to work from home, so one day I’ll be able to watch him run around the yard from out of my office window. If you had told me when I worked on a drilling rig that life could be that good I wouldn’t have believed you. The only thing that makes me unhappy at all is when I look at the wider world and see things that look like signs of systemic rot because then I think about how my children will have to deal with them.
I’m sure people like Elon get the same thrill from watching Teslas roll off the assembly line or when rockets shoot into space and you just get the feeling that this is the problem on Earth you were meant to solve. I do try to hold in mind that everyone is just “some guy” at the end of the day and everyone is psychologically flawed to a greater or lesser extent. I don’t think you’re ever “done” when you have a purpose like that although it can certainly magnify the level of mistake you make. We’re all processes, not products.
Anyway, SBF probably found his purpose with the crypto exchange and just lost his way.
Have there been any serious studies on who makes up the audience for interracial porn?
The obvious answer would be black men, but the way IR porn tends to portray black guys as animalistic beasts, and the emphasis on the girl's degraded state after the fact, suggests that they're not the intended audience.
Women don't tend to seek out visual porn, and when they do, I don't think it's of the "tiny girl smashed by five thugs" variety. I don't think they're the intended audience either.
Perhaps it's the American equivalent of tentacle porn, and it's for white men who like seeing white women degraded by beings they (consciously or subconsciously) view as nonhuman.
But there are a few questions here. It's usually taken for granted that men watch porn to insert themselves as the male in the video. But does this really work if the man is of a different race than the viewer? What if the main viewers are gay white men who instead actually insert themselves as the girl in some kind of sadomasochistic fantasy?
There's also the argument, popular in alt-right spheres, that IR porn is actually propaganda by certain sectors to demoralize white men. I'm agnostic on that view, and would be happy to hear arguments for and against it. Whenever I see these videos on porn platforms, they tend to have view counts high enough to suggest a fair number of people watch this stuff, whether or not it was manufactured as propaganda.
I always feel that the people who think that the main audience for interracial porn, or gay porn, or any other politicized category of pornography must be straight white Republican Christian men are just telling on themselves. The reason people qualify these theories with "strong suspicions" and "hunches" rather than verifiable facts is because these theories are usually based on nothing more than the smug desire to believe your political enemies have humiliating sexual hang-ups that further justify your own feelings of superiority. It is in its own way a form of mental pornography.
The obvious answer is probably the correct one. A lot of men do watch and enjoy porn that emphasizes the girl's degraded state, and if anything I would expect the appeal to be even greater when the girl so degraded is of a rival race.
I'm not a big fan of the idea that the right-leaning crowd is the main audience for interracial porn; as I already said, I believe it's mostly women. But... aren't you actually making a point in favour of this theory?
The ideas of "rival races" and that having sex with multiple men is degrading for a woman are much more likely to be held by people on the right than on the left. So your obvious answer is disproportionately pointing to the same group: white conservative men, though not necessarily Christian.
Re "rival races"...
A significant part of the population has a violent, instinctual drive when seeing a woman of "your tribe" with a man of "foreign tribe". It's almost like jealousy in that it's hardwired and happens regardless of your worldview. Or perhaps, the worldview is downstream from the instinct.
Evolution is a bitch.
Well I remember feeling something like this in my salad days before I actually took the time and effort to reflect on this part of my psyche and become a better person. It's really not the case that this happens regardless of your worldview.
Sure, some people can have a naturally stronger version of this feeling, but I also think there is a self-reinforcing pattern here based on whether you endorse and identify with the part of yourself that experiences this tribal jealousy or not. The feeling affects the worldview, which itself affects the feeling, and so on. As a result, people with strong feelings about rival races tend to have corresponding worldviews and thus are more right-leaning.
I'm not sure. I know it's completely baseless and lowkey hate my own ethnic group anyway (out of excessive exposure to them if nothing else), but I still feel on edge whenever I see a foreigner with one of "our" women. I'm not sure one can reflect their way out of it.
I expect that you can achieve some improvement through CBT-like practices. Maybe not to the point where you feel nothing at all, but still.
But even more so, I expect that if you had thought it completely reasonable to feel this feeling and identified a lot with it, you would have experienced it even more strongly.
I don't think you're understanding my point. The obvious answer is that men of race A are likely to find pornography where men of race A sexually degrade women of rival race B titillating, as it appeals to fantasies of domination, victory and racial revanchism. This is the instinct behind bride kidnapping and wartime sexual violence throughout history, simply sanitized and commercialized for mass consumption.
If the porn in question was about white men and Asian women, I don't believe there would be this much hemming and hawing about who it primarily appeals to. But the good readers of astralcodexten are not so gauche as to believe black men operate on the same sexual urges as all other groups of men throughout history.
I think I get your point well enough.
Do you understand, though, that such categories as "sexually degrade" or "rival races" are not objective but based on the perception of the viewer? To be actively turned on by the "sexual degrading of a member of a rival race," one has to have a peculiar way of perceiving reality in the first place. And this way of perceiving reality seems to be more common among the right-leaning crowd, which is mostly conservative white men. Thus it's reasonable to expect that they consume interracial porn, probably the kind with white men and black women, but maybe the opposite configuration as well, for the sake of breaking taboo.
I think this peculiar way of perceiving reality is a lot more common than you believe. I know that nobody in this comments section ever thinks of races being in rivalry with each other, but I'm not so sure the general population shares this aversion.
Voting habits, colored by race relations, belie the fact that plenty of black men have "right-leaning" views.
If I may, you may be marginally mis-reading the thesis of this Ape in a coat here Florian. No comment is being made on the common-ness of "this peculiar way of perceiving reality". Might be no one. Might be everyone. It appears somewhat irrelevant. Instead, a comment is being made on the nature of user attraction.
Your theory ends up suggesting that right-leaning crowds would favour a certain kind of porn, which is the very thing you started out saying your theory disagrees with. You create a loop, is what Ape is commenting on, and what caught my eye as well.
Perhaps if I rewrite the flow a bit it becomes clearer? I find it really intriguing.
You state:
-- " I always feel that the people who think that the main audience for interracial porn, or gay porn, or any other politicized category of pornography must be straight white Republican Christian men are just telling on themselves. " --
Then you give your reasoning why this suggestion is in error. E.g., attributing unsavoury characteristics to your thought-enemies is somehow titillating, and justifies a sense of superiority. Definitely something I agree with: it is maliciously easy to assume that the people one disagrees with are all secretly [Something Bad]. It's a bit of a mental masturbatory trap. I might even phrase it as -- wait, you've done it for us:
-- " [. . .] It is in its own way a form of mental pornography. " --
Then you offer a fairly reasonable alternative explanation of interracial pornography:
-- " The obvious answer is probably the correct one. A lot of men do watch and enjoy porn that emphasizes the girl's degraded state, and if anything I would expect the appeal to be even greater when the girl so degraded is of a rival race. " --
So if I may paraphrase your statements here? You note that there may be more obvious explanations for why interracial pornography appeals than the "mental pornography" of attributing a shameful sexual fascination to political opponents. It's just because some people like that kinda thing. Someone might watch a member of a rival race being degraded because it appeals to the fantasies of domination and victory. Men of Race A are likely to watch porn where men of Race A pound women of race B into the ground. Might be it appeals to some instinctive urges.
As you write:
-- " The obvious answer is that men of race A are likely to find pornography where men of race A sexually degrade women of rival race B titillating, as it appeals to fantasies of domination, victory and racial revanchism. This is the instinct behind bride kidnapping and wartime sexual violence throughout history, simply sanitized and commercialized for mass consumption. " --
The mild trap here is what Ape in a coat points out. You're saying that the obvious appeal of interracial porn is to watch members of a differing race be degraded.
Ape is noting that in order for that to appeal, someone needs to have the categories of "rival race", "degradation by sex", "interracial conflict" and a fair chunk of other associated mental models. In order to enjoy the thing, one does need to perceive the thing as existing.
And the people who tend to have those mental models tend to be straight white Republican conservative men. You know, the kind of people who might be inclined to think about "rival races" and "interracial rivalry" and consider the varying sexual urges of different races.
So you're saying that the /idea/ that straight white republican conservative men are (one of the) primary consumers of interracial degradation porn is incorrect because they're . . . The kind of people who would be the primary consumers of interracial degradation porn.
And actually, one might even suggest here that you're saying that the "mental pornography" of ascribing "humiliating sexual hang ups" to people one disagrees with is incorrect because . . . The kind of people who would be interested in the shameful sexual hangup of raceplay is precisely the kind of people who tend to have political ideologies that trend towards politicised notions of race.
It's just a little funny, is all. You're saying the theory is wrong because of all the reasons that make it not be wrong.
I don't know, rival races in a zero-sum competition seems a lot like critical race theory!
But my guess is that those inclined to obsess upon the significance of the races of porn actors (as distinct from appreciating a bit of variety) are likely to fall in a pattern compatible with horseshoe theory.
Other people have already dealt with the weirder explanations, and pointed out that women (including lesbians, oddly) can be into interracial porn. I think a big market is white men, though, and I have strong suspicions around the connection between interracial porn and right-wing or conservative political leanings. It's not just Paul Manafort with that particular mix.
"It's usually taken for granted that men watch porn to insert themselves as the male in the video. But does this really work if the man is of a different race than the viewer?"
I would say both of these statements are questionable. Given that porn featuring only women is also quite popular, it's not clear that self-insertion is the motivating factor. Nor is it unbelievable that a man could insert himself as a person of another race. The average viewer hardly resembles the male porn model in many other ways, such as age, size, fitness, etc., yet race is somehow a bridge too far?
"What if the main viewers are gay white men who instead actually insert themselves as the girl in some kind of sadomasochistic fantasy?"
It seems much more likely they would be watching interracial gay porn.
PornHub does fairly extensive research on this type of thing with their data, but I'm not sure if they address this question specifically, and I don't really want to research it on my work computer...
I'm not sure if the animalistic part is the problem. Bad romance novels about "cute naive maiden & big bad werewolf" are, apparently, catnip to a large market segment.
> Women don't tend to seek out visual porn, and when they do, I don't think it's of the "tiny girl smashed by five thugs" variety.
According to my girlfriend, that genre is exactly what a huge demographic of women seeks. Especially if it has good storytelling from the woman's perspective.
It doesn't have to be so overloaded. Maybe people - of whatever race or persuasion - just want to see 'girl smashed by big dick, and liking it'.
My totally unfounded working theory is that any particular porn subgenre has a strong audience for whom it is taboo. Interracial porn is watched by racists, transgender porn is watched by transphobic people, and BDSM porn is watched by prudes. The prevalence of step-sibling porn is because it's just on the edge between "taboo enough that it's exciting" and "not so offensive that you feel like a freak for watching it".
There is something about BDSM that matches being a prude really well. Self-discipline, and all that.
"BDSM porn is watched by prudes" does not seem to match the pattern set by the rest of your examples. Wouldn't something like "BDSM porn is watched by people who dislike the suffering of others" be a better fit?
That describes a lot of people. I think a person's social environment has a lot to do with what taboos they find exciting, so if you're a bleeding-heart hippie, probably no one will care whether you have a mild kink, because conspicuously disliking the suffering of others tends to go together with sex-positivity. But if you're an evangelical Christian, the opposite is true, so insofar as prudes watch porn at all, it'll be more popular if it's kinky in a way their peers would disapprove of.
Wouldn't their fellow prudes disapprove of any sort of porn?
Sure, you can change 'BDSM porn' to 'any porn' if you want, but that doesn't contrast as nicely.
Agree with this analysis. Iran has the highest percentage of gay porn consumption out of all countries, for instance.
Exactly. What a classic example of a paranoid conspiracy theory. People will postulate no end of ridiculous assumptions, so long as they get to think it’s all about them.
Bit of self-promotion here (please delete if it's too spammy!) but I wrote a post comparing forecast accuracy for PredictIt, FiveThirtyEight, and Manifold Markets, looking at their predictions for the US midterm elections:
https://mikesaintantoine.substack.com/p/scoring-midterm-election-forecasts
If people are interested I'll keep doing these types of analyses for future election cycles, as well as for other types of prediction markets and forecasts (like sports betting, weather forecasts, etc).
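For anyone curious what "scoring" these forecasts involves: the standard tool for probabilistic predictions against binary outcomes is the Brier score, the mean squared error between the stated probability and what actually happened (lower is better; always saying 50% earns 0.25). Here's a minimal sketch with made-up probabilities, not figures from the linked post:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and
    binary outcomes (1 = happened, 0 = didn't). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities one site gave three races, and what happened:
forecasts = [0.62, 0.40, 0.55]
outcomes = [1, 0, 1]

print(round(brier_score(forecasts, outcomes), 3))  # 0.169
```

Comparing sites then just means comparing their Brier scores over the same set of races.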
Regarding diminishing marginal returns to charitable dollars, I think Scott is clearly right, but SBF explicitly said the opposite on the 80,000 Hours podcast earlier this year.
"If you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter anymore…"
https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/
The least comfortable bit of all this for me personally, apart from losing some trivial amount of cash I had parked in FTX (which I would have probably gambled away anyway had it not been stolen, so I'm OK with it), is watching a huge crowd of EA people I had previously respected making arguments that mostly seem to boil down to 'doing crimes for our benefit is bad because it might be bad PR for us and thus not actually benefitting us'. Honourable mention to EY, who seems to have grokked that hurting people isn't good, even if you're much more clever than they are and thus think they deserve to have their money taken and redistributed for the greater good. The whole thing just makes me want to stand on a roof and scream out The Gods of the Copybook Headings until I fall off it.
I mean, dude, the central thing that utilitarians or consequentialists believe is "hurting people can be good." Like, the very first thing that anyone learns about this philosophy is the trolley problem, in which their answer is "you should murder a guy (so that five guys don't die)."
If you're a utilitarian or consequentialist who has a hard rule that says, "You may not hurt people, no matter what," congratulations, you're not a utilitarian, you're a deontologist.
I don't think anyone actually works this way. For example, child vaccination is non-consensually causing pain to a child for the greater good. You need a coherent theory of when to do this vs. not do this, which I think rule utilitarianism provides.
Sure.
I'm just saying, when John Roxton upthread says, "hurting people isn't good, even if you're much more clever than they are," that statement is rejecting utilitarianism. The entire central core of utilitarianism is "I believe that someone can be clever enough to hurt someone and have it be good." If you don't believe that, you aren't utilitarian.
Obviously you don't need to think that this particular event is good -- but if you reject it by stating the principle that "hurting people is universally bad," you're rejecting utilitarianism. If you reject it by appealing to the specific balance of good and evil involved in this situation, then you're doing so from a utilitarian framework.
I think you are wrong about what utilitarianism is or isn't. I have an old and extremely embarrassing Consequentialism FAQ that I wrote when I was in my early 20s and much stupider - it's not actually that good, but it's probably the best explanation I can provide. See https://web.archive.org/web/20161115073538/http://raikoth.net/consequentialism.html
I think I'm right about what utilitarianism is and isn't (though I stated it informally and starkly, and if we were going to quibble around the edges of the definition you could probably find a few ways to manage to have a not-actually-real-but-coherent utilitarian philosophy that forbade actually hurting anyone). And I also think that this isn't even remotely controversial. Every utilitarian ever has engaged with the trolley problem from the perspective of "I can think of toy examples where hurting people is okay."
(I also don't think that this is like "the big problem with utilitarianism," or anything -- I agree with you that people in general, no matter what their philosophy, don't hold an inflexible rule of "nobody should ever be hurt.")
To briefly engage with your FAQ: yes, weak rule utilitarianism is, as you point out in the FAQ, a heuristic. Strong rule utilitarianism is, I think, not actually a philosophy that any meaningful number of people hold, and is basically deontology.
How are you distinguishing strong from weak here?
I think part of the right answer to "why shouldn't you do bad things for the greater good" *is* because those things so rarely end up leading to the greater good, and "PR disaster" is an easy generic example of why that should be.
I think Gods of the Copybook Headings is important because those are principles passed down by cultural evolution, and the reason cultural evolution gave us those principles was because fit societies survived, and the reason societies that didn't do bad things for the greater good (which theoretically sounds great for a society!) were less fit was, indeed, because they kept blowing up in various bad ways. So I don't think there's a conflict between the "it will cause such-and-such bad consequences, here are some examples" perspective and the "it's just evil, don't do it" perspective.
Yeah. Take the organ theft thought experiment, for instance. It's trivial to reject it on the ground that it will keep people from going to the hospital if they're anything but terminal, and even if claimed to be kept secret, you should not accept that this will be successful. Only when phrased as "no, you have to accept the premise that perfect secrecy will be maintained" does it even become a challenge, and then you're so far away from any real-world situation that it doesn't matter except as an intellectual exercise.
"you're so far away from any real-world situation that it doesn't matter except as an intellectual exercise"
Yeah, that's true, in which case it doesn't matter if you pick the option "to hell with these five guys, let the trolley squish them, I wanna hear the screams" because it's nowhere near what you would do in the real world, and if the person proposing it tries to argue you into "but you should do the greatest good", then it has to be applicable to real life. Since it's highly unlikely any of us will be standing by a switch on a track, when a runaway trolley comes down the line, *and* there is one person tied to the track on this side *and* five people tied to the track on the other side, then it has no real world application and you may as well let your inner Snidely Whiplash free.
The surgeon one is a decent plot for a B-movie horror of the old school, but real life? Don't be silly.
The trolley problem with an actual trolley doesn't happen in the real world, but the problem "I have to hurt some people to help some more people" happens all the time. *Ordinary taxation* is such a trolley problem.
Or much maligned "death panels", which really are a thing, unavoidable one at that. The big issue with trolley problems is that they assume total certainty about outcomes and no second order consequences, and those features definitely are very far away from real world situations.
The "PR disaster" point of view is exactly what led the Catholic church to its current disaster of a situation concerning sexual abuse scandals. Because when you are in the PR disaster mindset the solution is not to not do bad things, it's to keep them secret.
The best option is not to do bad things, but once the bad things have been done, the more you are focused on "keep the bad things from causing a scandal," the less energy you have for "keep the bad things from happening again."
Especially if preventing recurrence requires admitting that it's happened before
Yes, agreed. If something is true it will be true whichever way you reach it.
I guess my worry is more that... suppose SBF had got away with it, this time. Suppose he made that massive gamble [edit: using embezzled client funds], it paid off, and then he missed a dose of his Parkinson's meds or whatever he was on, woke up and thought 'oh shit, that was mental, we'd better not do that again', carefully put all the money back where it should be, spent his massive surplus on mosquito nets, and *no-one ever found out* and he never did it again. (Side note: how many times did this, or something close to it, happen in reality?)
In this scenario, did he do anything wrong? I think this is where we will start to see the approaches diverge.
I think the answer is something like: if it was just a massive gamble that didn't break any laws or ethical injunctions, like me spending my own money on the lottery, and it was +EV, then he did nothing wrong.
Since he did break the law and ethical injunctions, I think even if it counterfactually turned out +EV in the end, he did do something wrong, and that thing was to break the law and ethical injunctions. Maybe in the counterfactual we might judge him less harshly, for the same reason we judge attempted murder as less bad than murder, but we should at least judge him a little. Again, see my post https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
>even if it counterfactually turned out +EV in the end
That's not how EV works. EV is the average of the outcomes across all possible futures, weighted by their probabilities, and it's estimated before making the decision; that's what the 'expected' part means. As you argue, in situations like this it's overwhelmingly likely to be very negative.
It's possible that a -EV decision ends up being profitable in our universe, like deciding to buy a lottery ticket that ends up winning, but that doesn't retroactively make it a good one.
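To put toy numbers on the lottery example (invented figures, purely illustrative):

```python
# Toy lottery: a $2 ticket with a one-in-a-million shot at a $1M prize.
ticket_price = 2.0
p_win = 1 / 1_000_000
prize = 1_000_000

# EV is computed over all possible futures *before* the draw:
ev = p_win * prize + (1 - p_win) * 0 - ticket_price
print(ev)  # -1.0
```

The ticket is -EV even in the one-in-a-million universe where it pays out; winning doesn't retroactively change the expectation.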
Yes, I didn't make it clear, but in my imaginary scenario above he was doing what he was (allegedly? do we still have to say that?) doing in reality, i.e. making massive, highly leveraged gambles with other people's money without their knowledge or consent, in violation of his own ToS as well as what I imagine to be several laws (fraud is fraud even if you aren't regulated, sorry).
I know this will sound silly but I'm really relieved to hear you call it out as wrong; it would have been a pretty brutal prior adjustment otherwise. Thank you for taking the time, and all the best with your bed-gibbering.
Maybe this is really dumb, but I am just dying to figure it out.
Why were all of these grants dependent on FTX? I don't understand how grants can be given out that depend on the stability of FTX. Were grantees not given actual cash? I've never received grant money so I don't know how it actually works. But like, what was FTXFF *actually* doing? Where did the money come from? I would have assumed it was just cash that came out of the revenues of FTX. Again, I'm sure this is super naive and ignorant, but I just want to understand... I would have thought that the collapse of FTX meant that no *future* grants would be awarded, not that all existing grants are bunk.
To expand on what Scott said: if you receive large sums of money "for charity" from somebody shortly before they go bankrupt in a massive multi-billion-dollar fraud scandal, the bankruptcy lawyers will be curious how you managed to come out ahead.
Not legal advice, and very fact-dependent, but it may be the case that people who already received, and in some cases spent, money may need to give some of it back to be distributed to other creditors of FTX. Similar things happened to people who, luckily or not, timed Madoff correctly and realized large gains from him. More detail in this link, also linked by Scott above:
https://forum.effectivealtruism.org/posts/o8B9kCkwteSqZg9zc/thoughts-on-legal-concerns-surrounding-the-ftx-situation
In case you meant another angle: at this level, charities raise grants to try to do specific things, i.e. run a study, buy and distribute 10k malaria nets, and similar. If they were just going to pocket the cash to spend on indeterminate charity, the grantors would (generally) rather keep it themselves to spend on indeterminate charity later.
I think it's something like: FTXFF has some amount of money in its treasury. They promise a grant to so-and-so in the form of let's say four installments of $10,000 each. Someone gets the first installment and is planning on three more. FTX goes kaput. FTXFF freezes its treasury because they're concerned about "clawbacks" in the bankruptcy proceedings. Also, the entire team of FTXFF quit in protest so even if they had the money there would be nobody to give it out.
If it's right that FTXFF has the money (i.e. fiat currency) in its treasury, but has frozen it, then the trustees ought not to have resigned, but should instead have remained in post to attempt to get that money to the grantees. Somebody will need to brief counsel for FTXFF if orders are sought against it in bankruptcy proceedings.
What I suspect may be the case is that either FTXFF doesn't have the money at all, but was making grants in the faith that FTX or related parties would provide money in future, or FTXFF has "money" in the sense of holding now worthless tokens. Either of those scenarios would seem to raise a question about why USD amounts were being committed, although I'm not familiar with the norms around grant-making foundations.
What confuses me about this is that everyone seems to have been planning to spend money immediately, assuming it would reliably arrive in the future. That's a risky assumption even if FTX was entirely on the level: they work in a notoriously volatile industry, and could just change their minds at any point and withdraw funding. Surely the thing to do is save money from donations to spread out over time and mitigate risk, and generally try to have as diverse a set of funding sources as possible.
Charitable funding comes in all sorts of different varieties, which include lump sum one-time payments, grants paid out in installment, "general support" dollars to be spent on all the operations of the organization that gets the money, and project-specific grants that are dedicated to (and may be the entire funding for) the staffing and other costs of a specific project.
It's pretty common, if an organization gets a grant promising 5 payments over five years to fund a specific project, that they would hire employees, rent office space, and otherwise plan to spend those promised future payments, such that their failure to arrive would be painful and disruptive.
Right there with you on the FTX situation. Trying to figure out what was my cognitive error in so strongly presuming that this one particular dude wouldn’t make this one particular obviously wrong series of decisions. The best I can do is, we all have blind spots for people who are sufficiently similar to us in background/culture/etc. We map those real similarities out to presumed similarities in other areas like risk preferences, proclivities to steal customer money, and so forth, even when we should know they don’t hold.
Essentially my model of SBF was, "what would I have done if I were SBF?" But you know, there's a reason I was never in a position to be SBF, and that reason directly relates to the stuff that happened subsequently, so the differences should probably be more dispositive than the similarities at that point.
It's hard to clearly see someone who's raising your own status with their actions. You're incentivized to believe them.
"Right there with you on the FTX situation. Trying to figure out what was my cognitive error in so strongly presuming that this one particular dude wouldn’t make this one particular obviously wrong series of decisions."
I am not trying to pile on, but... wasn't FTX offering an 8% return on held assets? This was lower than Celsius, but still high enough that the money clearly wasn't sitting in a 'savings account', right?
My working assumption is that pretty much every crypto exchange that offers returns like this is running some sort of scam because how else do you generate those returns? And I make the same assumption about the non-crypto folks who advertise these sorts of returns on the radio.
Doesn't this sort of return (with the implied lack of risk... it isn't like they tell you they are 'investing' your deposits in stock shares) fit the 'if it sounds too good to be true...' maxim?
8% on the first $10k isn’t like “this must be a ponzi” type returns, that to me just implies that they are staking in defi protocols — not a zero risk undertaking but very different from running a scam.
(And yes I am embarrassed to know the words “staking in defi protocols”)
This FTX stuff sucks.
I don't agree with a lot of EA priorities, I think people in poly relationships and people under 30 are reasonably poor choices for trust in general and with zillions of dollars in particular, and I was simultaneously leery of crypto and jealous of those huge returns...
But man, this sucks. I am so sorry for everyone from dodgy investors to innocent and well meaning charity types, and I am going to esp remember in my prayers those deluded ratbastards who drove it into the ground.
Is there some way someone could set up a link or something where I can donate money to collapsing charities without e-mailing Scott? I don't have enough money to bother Scott with but I'd like to help out.
I think eventually something like this will start existing; I'll let people know when it does.
Maybe it's https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/announcing-nonlinear-emergency-funding , I'm not sure.
I'm not here to dance on anyone's bones, and to all the people hiding under beds gibbering, have a "There, there" reassuring pat on the back.
That said, I do want to make one comment, which is "this is the reason EA should stay the chocolate fudge cake *away* from politics":
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America. You, a random AI researcher who tried Googling “who are these people and why are they giving me money” before accepting a $5,000 FTX grant, don’t need to feel guilty for not singlehandedly blowing the lid off this conspiracy. This is true even if a bunch of pundits who fawned over FTX on its way up have pivoted to posting screenshots of every sketchy thing they ever did and saying “Look at all the red flags!”"
Yeah. Exactly. Bankman-Fried and at least a couple others were well-connected via their families, so they had an in with places like the SEC when it came to "Oh, that's my old college pal So-and-so's kid, I'm sure they're just fine!" I know Bankman-Fried was not EA exclusively, but he did want to Do Good amongst other things, and making a shit-ton of money off crypto (pardon the crudity) was one way of doing that, and high-risk was the way to make a shit-ton of money fast; the higher the risk, the higher the reward.
I won't pick on the Dems for having taken money off him, as Republicans would equally have trousered the cash had his political leanings permitted him to donate to them. That's the problem that I wish EA or EA-aligned people who want to be politically active would consider more: yes, they will take your money. And yes, in return they will (appear to) listen to you, and take your calls, and you will think you are doing great and getting ahead on having the people in government becoming aware of The Issues, and meanwhile they will just be ticking off "list of meetings with donors today" in their diaries and it means nothing more than "keeping the cash-cows happy and mooing".
Stick to bed nets. Heck, even AI risk. Stay away from the swamp.
You're not being cynical enough.
Yes, absolutely politicians will fake interest in your issues to keep the money tap flowing. But do you think that they are somehow above also changing the law the way you want to keep the money tap flowing?
Now, I personally don't endorse spending large amounts on politics - but that's because I think you can get the same impact for a lot cheaper.
Not sure what you mean. Are you saying the political funding played a role in this disaster, or just predicting that it wouldn't have worked even if FTX had done well?
My position is: I agree that when SBF calls up politicians and talks to them about pandemics or whatever, they're not having deep emotional engagement with his arguments. But I think there are like fifty issues that politicians aren't having deep emotional engagement with, and part of (the most cynical interpretation of) their jobs is throwing bones to big donors, so if SBF did this enough times eventually they would say "sure, have some pandemic preparedness" and spend some of their political capital on doing what he wanted so that he kept donating. I don't think that's incompatible with "they just want to keep cash cows happy".
I do agree with you for other reasons though, see https://slatestarcodex.com/2015/09/22/beware-systemic-change/
My guess is that SBF was especially into this because he came from a family of Democratic fundraisers, and that the rest of EA won't put as many resources into it.
"Are you saying the political funding played a role in this disaster, or just predicting that it wouldn't have worked even if FTX had done well?"
The latter. EA as a movement seems to be drifting away from its roots or initial premises. Maybe it was always like this and we are just now seeing it, but there is a big difference between "let's evaluate the best use of charitable giving to see what really returns the most value and positive change" and "let's try to get a guy elected and donate money to politicians just like any other lobbying group".
"(P)art of (the most cynical interpretation of) their jobs is throwing bones to big donors, so if SBF did this enough times eventually they would say "sure, have some pandemic preparedness"
Yeah, you'd get it. I don't know if the US has quangos, but that's what you'd get:
https://en.wikipedia.org/wiki/Quango
Committees to prepare studies on possible papers to construct policies for consideration by a parliamentary sub-grouping. Lots of featherbedding, lots of opportunities for patronage by political parties to hand out appointments as rewards to party hacks and long term servants.
Tell me again, how is the Green New Deal doing and the new Green Economy and all those jobs that were going to be clean, high-earning, replacements for the rust belt?
https://en.wikipedia.org/wiki/Green_New_Deal#Legislative_outcome
You will certainly get bones thrown for pandemic preparedness. I'm sure Anthony Fauci wouldn't mind taking on another directorship. And hey, we could even add in Alexandros and his ivermectin and alternatives recommendations to make sure we cover all the bases!
A couple of warehouses stocked up with masks and PPE equipment, vaccines, the works!
Yeah, you'd get pandemic preparedness and the politicians showing the nice paperwork to the donors to keep them happy. But would this be any actual use?
There's a theory that SBFs involvement with politics was a way to buy reputation.
Yes, and it worked very well; FTX was the last exchange people thought would blow up, and the level of criminality is huge, on the level of Bernie Madoff.
Nah, his mother and brother had been doing Democratic politics stuff since long before SBF made money, I think that was a genuine interest.
Could it be both?
No one in crypto had a closer relationship to Gary Gensler than SBF and Caroline Ellison.
So people who saw some red flags in the last few months probably (and reasonably so) said: well clearly FTX and SEC have a close relationship, the SEC probably looked at that "red flag" and saw that it was all good.
SBF was actively lobbying for crypto legislation that would benefit his centralized exchange at the expense of decentralized crypto protocols.
Weird random question: one of my ribs is pushed back almost 1cm, and seems to be disconnected, or partially disconnected, from the sternum. I've been to a GP twice already, and they basically said that nothing can be done.
It is a constant irritation though, so does anyone have experience with this? Should I push for a specialist consultation, or is this really something that cannot be fixed?
It is normal for some ribs to be disconnected from the sternum, oddly enough.
If it gets in the way of movement or otherwise annoys you, you'll get better help from a physical therapist, not a doctor.
Something like that happened to one of my lowest ribs. It felt like it was pushed inwards and slightly up, catching on the rib just above it. Eventually it went away.
IIRC, the things that helped were serious stretching and yoga, aimed at torso flexibility.
If it's bothering you, I'd try to see a specialist. He may well give you the same answer, but at least he is more likely to have an in-depth understanding than your GP.
Don't know about ribs specifically, but in general the threshold for going in and doing surgery close to vital organs is much higher than people think. Every time you do that, there's serious risk of damaging nerves, having unexpected bleeding, infection, etc. So your doctor might be correct in their assessment.
If it bothers you due to psychological distress (body dysphoria), you really just need to get used to it. But if it's actually painful, or you suspect it's getting worse, I would try to get a specialist to look at it.
Someone who got his nerves permanently messed up and lost the use of his limbs (and, as a result, his job) because he followed a doctor's advice to get surgery:
https://twitter.com/razibkhan/status/1591684351657385986
I don't see how this is relevant to IJW's question about whether he should get surgery on that rib. Of course there exist people who are permanently messed up because they followed a doctor's advice to get surgery. There also exist people who are permanently messed up because they did not follow the doctor's advice to get surgery -- and people permanently messed up because the doctor should have suggested surgery but did not. And?
The relevant case would be the last one, and I'm not aware of many cases of that. I do know from the RAND health insurance experiment that people given more money for it will consume more medical care, but will not have better health outcomes.
I'm not a doctor, a nurse, or even a home health aide! Here's my intuition, though: it seems kind of important to have that rib connected to the sternum. Your heart's under there! Also, broken ribs can sometimes puncture lungs. If this one breaks all the way off from the sternum, mightn't it puncture something in the area? Also, it seems like bones would be easier to work with than soft tissue. Why can't a doctor put in a little metal plate or something that reattaches the rib to the sternum? Everything you need to reach is right there on the surface, under a thin layer of flesh, so it should be pretty easy to get at.
HOWEVER, that's all just my intuition. Would be good to ask the GP why there's no cause for concern, and why nothing can be done. Unless the answer is very convincing, go see a specialist.
I've had broken ribs twice and yeah, basically you just have to wait them out. For the first week or so be extra cognizant of any difficulty you may have breathing, looking into signs that the loose bone may have done damage to internal organs. It sucks, I wish you a speedy recovery!
No, I have had this for years now. It hurts only sometimes, and mildly, but it is annoying. I think it has solidified somewhat, and I honestly have no idea how it happened; I never felt a sharp pain there. But one rib is clearly pushed in about 7-8mm.
Hmm yeah if it's uncomfortable then I'd go see a specialist. My ribcage is still asymmetrical 10+ years after the first broken rib, so that's definitely not gonna fix itself.
I too have had two broken ribs from falls taken while trail running. If it hurts like crazy when you deeply inhale, it is probably fractured. The problem is that some rib fractures don't show up well in imaging. In any case my physicians bound my chest once (it's kind of a corset) and the second time I requested no binding, as the binding didn't serve any useful function I could see. At least that's my memory of events.
Following the blowup, I've been constantly wondering how much overlap it had with ACX. But the answer seems to be "waaaay more than I expected," which is wild.
I’ve recently started a free Substack blog on the psychology of belief, called Beliefology, and I’d be grateful for any feedback on my latest post. I’m particularly interested in whether people think that the reasoning presented in the post is a) clear, and b) sound. I’d also be interested in what people think about the writing.
The post is here: https://beliefology.substack.com/p/why-do-we-believe-what-we-believe
Correct. :)
Caroline Ellison, who ran Alameda Research when it blew up, was a reply girl to your tumblr: https://twitter.com/0xHonky/status/1591630071915483136?s=20&t=8M2y-KFZDqy1TGa_hHbIAw
her own tumblr was also... interesting
https://twitter.com/AutismCapital/status/1591601671943393280?s=20&t=8M2y-KFZDqy1TGa_hHbIAw
"Low risk aversion"
Might have to rethink that one.
"Of course those people are lame and not EAs"
Oh dear spirits of all the departed drunken journalists who ever boozed their way into the grave, this is the kind of publicity EA does not need. And this is why mocking the normies, even amongst yourselves, is a bad look. Don't do it in public and never write anything down. Stick to making quips at the kind of parties Scott has described for us.
"Here's what I think are some ~cute boy things~
- low risk aversion"
Well Caroline, your 'hot bad boy' low risk averted yourself, himself and several others into possible jail time (well not really likely, but a Bahamian luxury compound's worth of trouble anyhoo).
I realise this was a young woman being silly online, but this is why, children, you never shitpost under your real name! You have no idea what is coming down the track!
There's going to be a lot of gloating about dumb stuff the people involved said. A *lot*.
"There's going to be a lot of gloating about dumb stuff the people involved said. A *lot*"
And all of it will be shabby tabloid-culture piling on.
There's no end of things to scourge these people for, but having lots of casual sex in various conformations and doing nootropics are completely irrelevant to their transgressions.
Wrong. They were stimmed up daily.
The mocking of 'normies' really bothers me. If I'm not a normie, I'm normie adjacent.
"Many of these trees were my friends", Treebeard.
She didn't use her real name, it was an anonymous Tumblr and someone doxxed her.
I think if you are a random Jane Street trader deciding how to invest, it's right to be risk-neutral at least up to the size of some large fraction of the treasury of Jane Street. I think once you become big enough that there are externalities and you become a significant fraction of all the money in the world, that breaks down. I would have thought Caroline (including the version of Caroline who wrote that post) knew that, which is part of why I'm waiting to see what the explanation here is.
That's the trouble, there is always doxxing. We none of us know what is going to come back to haunt us down the line, unless we're so insignificant that it will never be more than "do you really want your mother to read that?" levels of potential embarrassment.
But even 'jokes' about how smarter and more efficient etc. you are on drugs than those poor dumb normies aren't really funny and do show a level of immaturity or arrogance that we now see got her into trouble: "well if Sam says it's okay, then I'll do it, because Sam is so smart".
"I would have thought Caroline (including the version of Caroline who wrote that post) knew that, which is part of why I'm waiting to see what the explanation here is."
Are you serious? From posts like those? Scott... you may have to seriously consider whether you have a severe gullibility problem. I'm not trying to dunk on you here. Between this and your post about believing people who say crazy unverifiable shit, you seem to be bad at a core skill of human interaction.
I think his reference to having been friends with a member of the community who moved abroad is pretty clear. They read and cross-posted each other's stuff.
Also, I suppose none of it will get much of a fair read in "this timeline" to borrow Scott's lens, but Caroline's writing is compelling and smart. It looks like her blog has been taken down, which is probably also smart on her part. But I think she's an intelligent person and I'd be much more interested in her memoir someday than Sam's.
But since she's smart, she probably will also listen to lawyers and lay low from here on out. And hopefully enjoy some non-internet based hobbies and craft.
It's the "why do girls like bad boys?" thing all over again: where it used to be "why do you like this drug-dealing petty criminal loser over the nice respectable guy", now it's "he has low risk aversion and that is so attractive".
Drug dealing petty criminal or guy who got in way over his head with billions of other people's money, no it's not big and it's not clever and grow up girls.
Given the results of her trading style it doesn't seem like much changed in terms of a trading thesis, except a new active participation in criminal activity.
Maybe this is the way to tie in those "normal human vices" Scott mentioned. If SBF was making massive undercollateralized loans of customer funds to impress a girl, well, that's at the very least *understandable*.
It does make me update towards workplace relationships being a much worse thing than I previously thought. I had always thought no-fraternization policies were just authoritarian garbage meant to stamp out any hint of humanity in the business environment, but now I see their purpose.
This fraud was going on for a long time; I don't think it was caused by one relationship.
Was it? I'm sure bad accounting practices were going on the whole time, but I doubt Alameda was insolvent before the Luna crash.
Not insolvent, but they had bad practices going on before then: access to exchange data, lying about potential returns to investors, the Reef finance saga. This is more recent but tells you about their attitude: https://twitter.com/pythianism/status/1591105870142001152?s=20&t=LczwmgHHEFOQy7Cd5GFdnA
"The arc of rationalism is long, but it bends towards meth-fueled sex cults"
Okay, pushing back against this, the only argument anyone has for meth is a tweet of Caroline's saying she feels much better on amphetamines. The obvious explanation for this (given that she tweeted it) is that she's on Adderall for ADHD (or "for ADHD").
Also, although Caroline has experimented with poly at different points in her life, I don't see where people are getting the polycule sex cult thing. There was one article saying that "all the higher-ups were having sex with each other", but that seems consistent with SBF having sex with his girlfriend Caroline, male-executive-A having sex with his girlfriend female-executive-B, and so on. All the actual descriptions of relationships I've seen are between two people.
I have no evidence there *wasn't* meth and sex cults, I just don't see any evidence that there *was*.
Apparently there was employee orientation on optimal stimulant stacks, which certainly is on brand for EA and rats.
it's all so pathetic
It very probably wasn't anything as lurid as rumour is making it out to be, but when there is a big juicy scandal that the general public can get their teeth into, it's even better if sex 'n' drugs (rock'n'roll optional) are involved.
If you're talking about "these people were living the billionaire lifestyle on stolen cash", then the Schadenfreude of righteous indignation is not as fully enjoyable if the reports are "and then they all went to bed at ten p.m. after having their cocoa and got up early and had a healthy breakfast". If you're going to live the billionaire lifestyle, whether or not you are going to be involved in dodgy deals, why not go for full-on decadence rather than bland boring beige conformity? Part of the deal of having super-rich elites, be they emperors or billionaires, is that they provide spectacle and colour for the enjoyment of the commoners!
The Good Rich Man
by G. K. Chesterton
Mr. Mandragon the Millionaire, he wouldn't have wine or wife,
He couldn't endure complexity; he lived the simple life.
He ordered his lunch by megaphone in manly, simple tones,
And used all his motors for canvassing voters, and twenty telephones;
Besides a dandy little machine,
Cunning and neat as ever was seen
With a hundred pulleys and cranks between,
Made of metal and kept quite clean,
To hoist him out of his healthful bed on every day of his life,
And wash him and brush him, and shave him and dress him to live the Simple Life.
Mr. Mandragon was most refined and quietly, neatly dressed,
Say all the American newspapers that know refinement best;
Neat and quiet the hair and hat, and the coat quiet and neat.
A trouser worn upon either leg, while boots adorn the feet;
And not, as any one might expect,
A Tiger Skin, all striped and flecked,
And a Peacock Hat with the tail erect,
A scarlet tunic with sunflowers decked,
That might have had a more marked effect,
And pleased the pride of a weaker man that yearned for wine or wife;
But fame and the flagon, for Mr. Mandragon obscured the Simple Life.
Mr. Mandragon the Millionaire, I am happy to say, is dead;
He enjoyed a quiet funeral in a crematorium shed,
And he lies there fluffy and soft and grey, and certainly quite refined,
When he might have rotted to flowers and fruit with Adam and all mankind,
Or been eaten by wolves athirst for blood,
Or burnt on a big tall pyre of wood,
In a towering flame, as a heathen should,
Or even sat with us here at food,
Merrily taking twopenny ale and cheese with a pocket-knife;
But these were luxuries not for him who went for the Simple Life.
Given that we have Caroline explicitly saying that she is sexually attracted to boys with low risk aversion, it seems reasonable to assume that there is some sort of causal relationship between the sex and the terrible financial decisions that bankrupted the company.
Add in the regular amphetamine use, and you're one methyl group away from calling the operation a "meth-fueled sex cult".
I am not a biologist, but "one methyl group" seems like it would make a pretty big difference to me. But even if some members of that company used legally prescribed Desoxyn instead of Adderall, "meth-fueled sex cult" kinda implies orgies under the influence of methamphetamine or some such.
I think it is reasonable to hold potentially slanderous claims to a higher standard of evidence.
I have little regard for "crypto" (meaning crypto currencies) in general, but to me the claims imply:
a) some group X is a cult
b) the group uses sex in rituals, as rewards or some such
(a cult whose members have a sex life does not make it a sex cult, or about every cult would be one)
c) the cultish activities, sexual or otherwise, are depending on or enhanced by methamphetamine.
My understanding is that the difference in effect between meth and regular d-amphetamines is mostly a combination of dosage and method of administration.
I'm specifying d-amphetamines in particular because they generally have more of a dopamine effect than l-amphetamines, while the l-amphetamines produce most of the norepinephrine effects. From a recreational abuse standpoint, you want more dopamine (pleasurable high, burst of energy, etc.) and less norepinephrine (which tends to be deeply unpleasant past a certain level). Benzedrine or Evekeo are 50/50 mixes of the two isomers, Adderall is a 3:1 mix of d- and l-, and Dexedrine is pure d-. Recreational meth is typically mostly d-methamphetamine, while l-meth is sold over the counter as a nasal decongestant inhaler.
That said, d-amphetamine (Dexedrine) and d-methylphenidate (Focalin) are both regularly used as prescription meds. For that matter, meth is legally produced and prescribed in pill form for the same sorts of things (mostly ADHD or narcolepsy) that the others are prescribed for. And by most accounts, the effects are pretty similar.
One key difference in meth-as-street-drug is that it's usually snorted or smoked, which gives you a big rush of drug in your system all at once very quickly after taking it. If you swallow a pill, it needs to go through your stomach first and get absorbed in your intestines, which slows and spreads out the dose in your system. Conversely, d-amphetamine is also sold as Vyvanse, where it's bonded to a lysine amino acid that needs to be metabolized off of it before the drug does anything, spreading and delaying the peak blood concentration still further: this is done specifically to reduce the abuse potential.
The other big difference is dosage. Typical therapeutic dosage of meth is 5-25mg daily, often divided into morning and afternoon doses. A quick googling suggests that a typical recreational dose of meth is more than an order of magnitude higher, around 200 mg in a single dose.
"Attracted to people with low risk aversion" is just a nerdy way of saying something like "bold, daring men".
If we learned that Jill Biden had written in her diary that she was attracted to bold, daring men, and then Biden took some decisive action like the chip sanctions on China, would we describe the White House as a sex cult?
Scott, can we describe the White House as anything *but* a sex cult? After what Bill got up to in the Oval Office, is there anything that has not been reduced to "powerful men like getting their end away, and impressionable women like servicing powerful men"?
Of course the White House is a sex cult, it was built by freemasons :)
>"Attracted to people with low risk aversion" is just a nerdy way of saying something like "bold, daring men".
I would accept this argument were it not for the fact that almost everyone involved is a current or former quant trader with a rationality blog. There really does seem to be a cult of low risk aversion in the mathematical decision theory sense among FTX/Alameda executives, not just the bold, daring action sense.
We have to explain the fact that a bunch of intelligent, well-meaning people ended up St. Petersburg paradoxing themselves into incredibly stupid and evil decisions. There has to be something else going on, and polyamorous status jockeying (https://i.redd.it/454qxctl7pz91.jpg) fits what little we know like a glove.
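(Illustrative aside: the gap between risk-neutral expected-value maximization and Kelly-style log-utility betting, which is what the "St. Petersburg paradoxing" framing gets at, is easy to see in a toy simulation. The 60/40 coin, the round count, and every number below are made up for illustration; this is not a model of anything Alameda actually traded.)

```python
import random

random.seed(0)

def simulate(fraction, p=0.6, rounds=200, trials=5000):
    """Share of trials that end with more than the starting bankroll,
    betting `fraction` of the bankroll each round on a p-weighted coin."""
    ahead = 0
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            if random.random() < p:
                bankroll += stake  # win: stake doubled back
            else:
                bankroll -= stake  # loss: stake gone
        if bankroll > 1.0:
            ahead += 1
    return ahead / trials

# "Risk-neutral" play: bet the whole bankroll every round.
# Expected value per round is 1.2x, so this maximizes expected dollars.
all_in = simulate(1.0)

# Kelly play: bet p - q = 0.2 of the bankroll, maximizing expected log wealth.
kelly = simulate(0.2)

print(all_in, kelly)
```

With these made-up parameters the all-in strategy has the higher expected value at every step, yet essentially every simulated trial ends at zero (one loss wipes you out), while the Kelly fraction ends ahead in the large majority of trials. That asymmetry is the whole complaint.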
Yes, a lot of it is speculation (the doctor they had on retainer exclusively to prescribe meds is just an unproven rumour, but apparently a common thing among trading firms), but these people seem to have been abusing drugs daily for years, unless daily consumption of stims and sleeping pills is not considered abuse: https://twitter.com/SBF_FTX/status/1173351344159117312?s=20&t=LczwmgHHEFOQy7Cd5GFdnA
Again, I don't want to defend them but I think there is a difference between "used Adderall as prescribed, albeit by a sketchy doctor" and "on meth", and it's not fair to replace one with the other to make something sound more lurid.
(even though I am on record saying the difference is somewhat less than usually believed, see section II of https://astralcodexten.substack.com/p/drug-users-use-a-lot-of-drugs )
That's fair, and my initial comment was obviously not completely serious; I guess Yudkowsky's shenanigans were on my mind as well when I wrote it. The last week has been a huge disaster for crypto and everyone involved, and it will be a long time until all the truth comes out.
Have you read (or reviewed) Larson’s “Myth of Artificial Intelligence”? Curious as to your thoughts.
Not going to watch a 43 minute video, but yes I agree with the conclusion. Peterson uses the religious language, but he is not religious in the sense that religious people would recognize. He mostly uses religion as a metaphor.
Thank you for leaving.
I share those concerns, but I'm not here because I "support" the Substack. I'm here because it's read by a lot of smart and potentially influential people who pride themselves on their intellectual openness but have disappointingly shallow understandings of ideas that matter deeply to me, such as Kantian ethics and feminism, so there's a decent expected return on articulating basic defenses of these ideas.
It's quite possible that, like many Internet spaces before it, this community is on a self-reinforcing slide towards the alt-right as people like you get driven away and replaced by increasingly open reactionaries. But even if so, there's a long period in this pattern where it's worthwhile for decent people to resist the process, refuse to be bullied away, and impede alt-right radicalization by calling out obvious strawmanning of progressive ideas (without getting baited into pointless debates); imo ACX's professed open-mindedness and no-downvotes interface make this effort worthwhile for at least a while longer.
What do you think about the self-reinforcing slide toward the woke left in every corporate, academic, and cultural sphere, as people like me are driven away and replaced by increasingly open SJWs? What do you think about the bullying, the woke-left radicalization, and the strawmanning of conservative ideas in literally the rest of society? It's great that you're here and contributing to the intellectual diversity of this forum, but what I want to say is that the disdain toward woke ideology doesn't come from nowhere. It comes from exactly the things you're complaining about, just done by the other side, the side that happens to have hegemonic power over society right now.
I similarly believe that both conservative ideas and left-leaning discursive spaces benefit when intelligent conservatives leave their comfortable safe spaces and sally forth to politely defend their ideas, call out strawmen, and generally keep progressives from wandering out of our mottes. In the left-leaning political groups I belong to, I'm always nagging my comrades to give moderate and conservative viewpoints their due.
I do think that "woke left" ideas dominate in academia in part because they're more correct, and therefore more fit to survive in a highly competitive memetic environment where people compete for professional status by mustering empirical and theoretical arguments to overturn others' theses and defend their own, but there's certainly an element of groupthink as well. This leads dumbed-down versions of these ideas to spread among young people as their average exposure to higher education increases, which in turn leads corporations chasing young customers and talent to espouse capitalism-compatible bowdlerizations of these ideas (e.g. DEI instead of Black Marxism). There are lots of opportunities at every step of this process for smart, articulate conservatives to check excesses, and I think this would be much better intellectual citizenship, and better tactics, than retreating to alt-right fora where you can sneer about woke strawmen in peace.
Why would intelligent conservatives do any of that when the rationally expected result is to be dogpiled with political and personal vitriol, possibly extending back into meatspace if one is insufficiently anonymous, and our actual defense of our ideas is ignored in favor of easy, entertaining strawmen?
The number of places that aren't clearly "conservative safe spaces", but where the above is not the rationally expected result of defending conservative ideas, is tiny. This is one of them. I can't think of any others offhand.
Maybe out of a selfless and manly commitment to the common weal, or disdain for the kind of snowflake who lets their fear of being called bad names keep them from exchanging ideas with those who disagree with them? I'm given to understand that conservatives value these things.
But I’d also urge you to consider that, just as liberal snowflakes misinterpret legitimate criticism of their ideas as “violence” and refuse to engage in reasonably constructive spaces like ACX, conservatives might have created a comforting caricature of academic speech norms as an excuse for preemptively refusing to engage with critics they subconsciously fear. Consider this philosophy paper by a natural law theorist arguing that eating meat is ethical:
https://www.academia.edu/download/37126644/meat-paper.pdf
then its citations on Google Scholar:
https://scholar.google.com/scholar?cites=11469417063371242225&as_sdt=5,39&sciodt=7,39&hl=en
Particularly this leading response:
https://vegstudies.univie.ac.at/fileadmin/user_upload/p_foodethik/Bruers__Stijn_2015_In_Defense_of_Eating_Vegan_in._J._of_Agricultural_and_Enviro_Ethics_.pdf
You can then decide for yourself whether this is a good-faith and rational response to conservative ideas, or a mere dogpile of political and personal vitriol that ignores the arguments made.
"In the left-leaning political groups I belong to, I'm always nagging my comrades to give moderate and conservative viewpoints their due."
Not that long ago, this used to be me. The name calling, strawmanning, complete unwillingness to see nuance, and often censorship were what convinced me, more than anything else, that conservative ideas had merit. The woke left worry that letting people hear conservative ideas would immediately convert them, but I was actually convinced by the censorship to give the underdog a chance, to stick it to the authoritarians who want to tell grown adults what they can or cannot read, hear, or say. I admit that's not very rational, but sympathy for the underdog used to be a left-wing instinct.
"I do think that "woke left" ideas dominate in academia in part because they're more correct, and therefore more fit to survive in a highly competitive memetic environment where people compete for professional status by mustering empirical and theoretical arguments to overturn others' theses and defend their own"
I'm in academia, and I can tell you that this never happens when it comes to SJ issues. 95% of the time, discussions of SJ issues are echo chambers where every idea is leftist to the hilt, and no idea is ever challenged. If you do raise a conservative opinion, even one that's obviously and demonstrably correct (e.g. "this hate crime wasn't actually a hate crime, according to the police and every news outlet, including the most left wing"), you get immediately dogpiled, accused of creating an unsafe work environment, and sometimes reported to HR. Campaigns will start to kick you out of the university, and if another institution offers you a job, to get that institution to retract the offer. People will ask your officemates whether they're OK--as if working in the same room as a conservative is a physical hazard. None of this is hypothetical. I've seen all of it with my own eyes. If you want a highly competitive memetic environment where people compete for status by mustering arguments to overturn others' theses and defend their own, and those theses have anything to do with SJ, you're more likely to find it in the neighborhood bar than anywhere near academia.
Funny you should mention—just the other day I facilitated a discussion in my left-leaning academic group about how we should respond to the officer-involved death of a trans fellow student at our university, in what the family claims but the authorities deny was a bias-motivated killing. Different views were expressed and heard out respectfully (including that the authorities’ account may be true or partially true!), compromises were proposed, the thorny issue of gender-neutral language in Spanish was raised and elegantly addressed, votes were taken, a rather moderate compromise statement was issued, and nobody was dogpiled or cancelled.
I was fairly proud of the process and result, as it does take a certain amount of tact and delicacy to avoid upsetting people when discussing sensitive topics like the violent death of a classmate, and sometimes even some tedious throat-clearing of the “everyone here can agree on the equal humanity of trans people, I just want to be sure we keep in mind that” variety. But I certainly wouldn’t give myself credit for pulling off a twenty-to-one miracle—these discussions are totally routine in our group and on our campus.
If you really can't voice a conservative opinion without upsetting people in the ways you describe, I'd encourage you to consider whether you might be wording things in an unnecessarily provocative way that makes people think you're more interested in iconoclasm than in seriously seeking truth.
"If you really can't voice a conservative opinion without upsetting people in the ways you describe, I'd encourage you to consider whether you might be wording things in an unnecessarily provocative way that makes people think you're more interested in iconoclasm than in seriously seeking truth."
I'd encourage you to consider that your experiences may not be typical. I'm glad that you were able to have a successful discussion where different points of view were represented, but I'm telling you that the academic environments I encounter are ruled by an oppressive orthodoxy. Rather than take my experiences seriously, you immediately jumped to victim blaming: "what did you do to provoke them?" (You didn't even blame the right victim, because in most of the cases I was an observer and not a participant.) Of course, it *is* always possible to cower in fear more and tiptoe more around others' sensibilities. But there's clearly a double standard here, because which part of terms like "microaggressions", "white privilege", "dead white men", "cultural genocide", or "whiteness" are not unnecessarily provocative? Labelling conservative speech as violence--that's not provocative? How about tearing down statues of Lincoln or Washington--that's not provocative? The whole basis of wokism is unnecessary provocation, mixed in with explicit discrimination against disfavored groups to boot.
Lastly, I'd suggest considering human nature. Even the purest, noblest ideology leads to authoritarianism when given hegemonic power. Jesus forgave the prostitute, turned the other cheek, and said the meek will inherit the earth; his ideology was used to justify burning heretics at the stake and killing millions of people in religious wars. Now, I'd be lying if I thought woke ideology was pure and noble, because censorship and discrimination based on race and sex have been repulsive to me since I was a kid. But even if it *was* pure and noble, human nature is not, and ever since wokism has gained hegemonic power, it has been used for power, greed, and status. Don't be so sure that you won't be the next victim. After all, most of the victims of cancel culture have been left wing, because most conservatives live in conservative bubbles where they're immune to cancellation.
Good luck and good bye, and of course you can always come back.
K thx bye
I'm pretty sure that Bitcoin's original purpose was to evade oversight to enable various crimes - including buying and selling illegal merchandise (drugs mainly) and obviously tax evasion.
I think the likely intent of the early bitcoin crowd (before people started using it as an investment) was to facilitate transactions without legal restrictions.
Depending on the laws, legal restrictions may or may not be reasonable. Many sorts of transactions are prohibited in some places: slave trading (certainly bad), buying toxins (probably bad), buying drugs (debatable), gambling (debatable), donating money to designated terrorist organizations (such as Daesh or Wikileaks), prostitution (I guess cash is easier than bitcoin there), buying medicine (probably net positive), buying contraceptives (good) or banned books or movies (probably easier to get using the tools commonly used to circumvent copyright than paying btc).
Like with the onion router network (which provides anonymous communication (or is supposed to anyhow, who can say how many nodes are run by the NSA)) or even encryption the question comes down to how you see the state. If you model a state as a generally benevolent entity which knows best and only prohibits stuff for good reasons, all such tech solutions are tools for evil criminals. If you either view your state as oppressive or are concerned about it becoming so in the future, your view will be quite different.
(While TOR is certainly used by dissidents and such, I concede that I have not heard many heartwarming stories about people buying contraceptives or sex education books using bitcoin.)
For the purpose of transactions, the long term price development seems of little relevance as neither party needs to hold onto the bitcoins very long.
Regarding Argentus's claim that crypto gives non-criminals a way to invest in the crime sector, I see two plausible interpretations. One could claim that investing in an asset used for tax evasion, thereby driving up the price (and in turn profiting from the demand created by tax evaders), is equivalent to investing in the tax evasion crime sector. The same claim could possibly be made for other investments like buildings, art or gold, though.
Or one could claim that the various cryptocurrencies and tokens are pyramid schemes used to defraud investors. Claiming that the early investors who jump ship in time were investors in crime while the ones who stuck with it until the rug pull were victims seems a bit arbitrary to me, though.
? Meh? That’s it?
This post is bad and you should feel bad.
There's nothing scandalous there and nothing that deserves this kind of nonsense.
Because while the absurdity heuristic and the related offensiveness heuristic are poor ways to determine what is true, they are really good ways for people to coordinate meanness.
Because Scott is hyper-sensitive to criticism, even the kind of inane criticism you are offering here.
Wow that thread was a nothingburger.
There is a spectrum of positions that would fall under the term "human biodiversity". In the weakest sense, it says races have different average characteristics, which is obviously true (height, skin color, eye color, hair texture, etc). Then you have those who extend this to differences in average intelligence, temperament, or other mental characteristics. Then you have those who claim that those differences are significant in determining group outcomes. Then you have varying degrees of views that claim that makes some races somehow "lesser" than others. This last set of views is what I would call "scientific racism", and is indeed a terrible view to hold, but I am quite certain Scott does not hold it. I believe he falls somewhere in the middle two, though I'm not sure where. Importantly, believing in differences in group averages does not mean you believe those averages should be applied to individuals. That East Asians are shorter on average than Europeans is well-established, but no reasonable person would say that Yao Ming shouldn't be in the NBA because of this.
It sounds to me like you're calling at least the last 3 things HBD to include Scott, then turning around and calling only the last one HBD to impugn Scott as believing it. Or maybe you do have such a black and white view of the world that you think any belief in different average mental characteristics is no different from wanting to round up black people into camps. Either way, you do not seem to be acting in good faith. I may be wrong and you will actually engage with this comment. I would welcome that. But I can't help but suspect you are mainly trying to stir up trouble.
As a scientist, I can say that race obviously relates to intelligence in a trivial sense: it is impossible that any two distributions, even of people of the same race, will have an identical mean (average). Accordingly, there is always some shift in the average.
Also, as a scientist, I am well aware that there are both smarter and dumber people than me in all of those distributions, regardless of race. Accordingly, I don't really care how big the offset in the mean is. That one race is less intelligent than another on average is obviously valid. Spending your time arguing that it is true is petty and pointless.
I find this in poor taste, both for its intent and for its insistence on reading literally an obviously in-jest addendum.
Leaking emails like this is not a good look either; neither is signal-boosting them. And the "horrible revenge" comment is obviously a joke.
That said, I don't see the mails as evidence that Scott is a horrible racist; rather, that he's willing to admit (even if only in private) a very inconvenient, but very likely true, fact.
If it's not evidence of racism, it *is* evidence that you should temper your trust of Scott when it comes to... well, any NRx-favored positions, inversely to how much you trust him to have a superlative ability to "filter out the garbage" - especially given his revealed priors.
This is extremely uncharitable especially given the actual text
I don't see how you can say that.
The actual text is that he thinks that some of what neoreactionaries say are "nuggets of gold", i.e. extremely valuable, and says he can filter out the garbage. Whether you, the reader, trust that Scott can do that filtering as well as he thinks without unconsciously adopting their garbage is kind of the whole takeaway.
The text also literally shows several examples of what he believes in or supports as a result of that filtering, giving you evidence to how good his filters are (or are not).
I think that's actually very charitable, if you think you can trust Scott's bullshit detector. The only reason to call it uncharitable is if you already think Scott has failed in the credulity department, at least in this category.
I don't trust Scott - or anyone else, for that matter - to filter any of my beliefs. Neither does any other mature, intelligent adult. The only thing I trust Scott to do is to examine interesting ideas with entertaining rhetoric. I don't have to trust his bullshit detectors because I have my own. This isn't a cult, it's a group of smart people who like good writing. It's a place for the reasoned debate of contentious topics. If you can't hear an idea without adopting it uncritically, then probably this isn't the place for you.
It sounds like you're triggered because Scott apparently finds merit in an idea that offends you. Why should anyone else care about your reaction? This is a community that values coming to common understandings of objective truths. If you disagree with the idea, then debate the idea. That's how a common understanding is generated. Maybe you're right, maybe you're wrong. Maybe both sides have something to learn. But the process isn't helped at all by insulting someone for holding an opinion that you don't. This community doesn't play political gotcha and it doesn't ostracize people for wrongthink.
The fact that Scott is able to find some merit in an ideology that he generally disagrees with only bolsters the notion that he's able to evaluate ideas on their own merit. I think it's to his credit, and your reaction doesn't reflect well on your ability to do the same.
Now, if you'd like to debate the ideology in question in a reasonable way, be my guest. But if you're just interested in insulting people for disagreeing with you, then this isn't the place for you.
It appears as though this comment is responding to something other than the comment it is a reply to. I'm not sure what part of my comment you're saying is an "insult", but your comment certainly seems to be full of barbs directed at me.
But this gets at one of my pet peeves, so sure, I'll take the bait.
>I don't trust Scott - or anyone else, for that matter - to filter any of my beliefs.
I'll be blunt, this is bullshit. This is a claim I hear a lot, and I've never been convinced by it. Everyone's internal models of the world are implicitly based on what other people say, because there's no other way to do it.
In order to accumulate knowledge, you need to (a) gather it from first-principles observation or (b) get it from someone else. And the vast, vast majority of it is going to be (b), because no one has the time or even physical ability to do otherwise.
Sure, you might not accept everything uncritically from everyone. But for many people, you're going to trust most of what they say is true, because you genuinely don't have the time to fact check everything that everyone says. It would be inefficient, and frankly *irrational* to do otherwise.
So unless you can honestly claim you've never acted on or shared-as-true a single piece of information you haven't personally fact checked from multiple sources (and if you did claim that, I'd call you a liar) you're going to end up incorporating things that other people say at face value. Especially so for people you trust to have intelligent, researched positions.
And in general, Scott *should* be one of those people that you incorporate at a high trust-value. He does a lot of work to make things legible, and is generally good at doing it with minimal bias.
(Scott has also personally touched on the topic of when to trust "experts" before, here: https://astralcodexten.substack.com/p/bounded-distrust and here: https://astralcodexten.substack.com/p/webmd-and-the-tragedy-of-legible)
>The only thing I trust Scott to do is to examine interesting ideas with entertaining rhetoric.
Cool, but that's not all Scott does. His articles are frequently in the form of persuasive essays trying to convince the reader he is correct. Hell, one of the most recent posts was literally a voting guide for the election, based on Scott's research opinions. (One where he says he leans "liberal to libertarian", and says that the closer you are to that the more valuable you'll find it; the degree to which he is actively hiding NRx influence is *very* relevant here)
Regardless of whether you think so, Scott frequently presents his opinion as trustworthy because he's done sufficient research from sources he believes are credible. (And once again, much of the time this is true! Certainly compared to most places.)
>I don't have to trust his bullshit detectors because I have my own.
Cool, part of the process of calibrating your bullshit detectors is determining how much you should trust others. I'm merely stating auxiliary calibration info.
>If you can't hear an idea without adopting it uncritically, then probably this isn't the place for you.
Yeah, it's not me I'm worried about. There's no "critical information literacy" test required to read Scott's blog, and he's an intelligent person trusted and recommended by many other intelligent people.
>It sounds like you're triggered
Clichéd misuse of "triggered"; minus one Quirrell point.
>This is a community that values coming to common understandings of objective truths.
A substantial part of the past two weeks has literally been debating subjective experiences, keep up.
> If you disagree with the idea, then debate the idea.
Scott has explicitly asked people to avoid debating HBD in his comment section before, so I'm not going to start on that. Suffice it to say, I and many others consider NRx beliefs on the matter to *not* be objective truths, certainly not to the extent that we'd say they're "right" about it.
>But if you're just interested in insulting people for disagreeing with you, then this isn't the place for you.
The amount of words your comment spends directing insults at me because I disagree with you, just to end with that, deserves an award in irony.
I think the underlying idea here is that people "unconsciously adopt garbage". It sounds like basic epistemic humility, but it's ill-founded: too vague, very different from "vulnerable to this list of biases and fallacies," and a fully generalizable thought-terminator that obliterates the ability to engage with anything.
Moreover, a writer who openly worries about that seems to be signaling that they don't trust their own judgment.
I don't see how you can conclude that since HBD is an empirical claim while racism is a philosophical one. While the two positions can overlap rhetorically, one certainly isn't a logical consequence of the other. It isn't at all inconsistent to say both "average IQ differs between racial groups" and "all racial groups have equal moral and legal value." To claim otherwise is, in fact, a racist position: it makes the moral equality of racial groups contingent on empirical facts that may turn out to be false. If you can only accept someone as your moral equal if they're as smart as you ... well, I wouldn't call that a morally defensible position.
Oh no, a writer writes words to attract an audience.
Next you'll be telling me that restaurants decide their menus based on what people want to eat.
No one else does.
I assume that also means you think he's left the internet forever.
Jealousy applies to more than just sexual jealousy, and it's fairly clear Brennan was jealous of Scott, hence the nasty little email leak. After all, nobody was writing NYT articles about *him*, good bad or indifferent!
As much as I hate the plastic straw bans in general (and personally think they're the result of fallacious reasoning[1]), I think the problem is not that plastic straws are superior, but rather that most eating establishments pick the absolute cheapest and therefore shittiest paper straws. There are higher quality paper straws that don't suck (or rather do suck in the literal sense, unlike the cheap ones after five minutes), and other non-paper options, and we should be shaming any business that cheaps out so aggressively. It's also a supply-side problem - plastic straws had decades to get the unit cost down to pennies, we need straw manufacturers to come up with something decent and affordable that doesn't contribute to worsening the environmental plastic crisis.
[1] My personal conspiracy theory is that most of these bans are based on studies of "plastic recovered from beaches" (which I have seen cited on several websites campaigning for them), in which case you have a kind of survivorship bias: straws are a relatively large piece of plastic, and much easier to spot than smaller bits of plastic waste, but smaller than the most obvious waste (like bottles) that would be collected before more detailed cleanup. Therefore they're over-represented in those studies.
How frequently does one really need a straw (any kind of straw)? Maybe boba is a use case.
I wanna say the answer is plastic straw sit-ins; get a bunch of people together to fill a restaurant, and they all bring plastic straws. Or go the other way; get a bunch of people together and go full Airplane!, just splash all the liquids near your face, then melodramatically lament "if ONLY there were a better way!" Everyone in unison; you'll probably have to practice. (Probably bring a mop too, that'd be a pain for staff to clean.)
Maybe you could get everyone to bring small PVC pipes and use those as protest straws. Pinch one end of the straw shut and argue that means it's not a straw anymore, it's a cone. Get your dirtiest coat and pass out plastic straws in back alleys.
Even better! Think how much longer you can keep up the gag if it's not even illegal!
In today's culture you've got to invert it and get paper straws banned. #activism
It used to be done by pointing out that paper costs significantly more energy to make than plastic, which is true, and that paper mills usually have more environmental impact than plastics factories, which is also true.
The plastics industry tried this many years ago, when people were suspicious of the change from sturdy paper grocery bags to cheap plastic grocery bags that readily tore and dumped your carefully packed cans of frozen OJ, cauliflower, and perfectly ripe peaches all over the freaking grocery store parking lot, raising one's blood pressure 20 pts in an instant. It was easy to persuade the stores, because plastic bags were cheaper and far easier to store and transport, but shoppers were pretty resistant. The ol' "Paper or plastic?" refrain, which those of an age will recognize as the punchline of many jokes, was an attempt to do a little prodding to see if the next victim...er...hapless customer could be persuaded to take one for Team Gaia, if not Team Albertson's/Kroger/Safeway Bottom Line.
Never worked, though. Nobody could ever bring themselves to believe that plastic was better for the environment than paper, inasmuch as paper is made of trees and what could be more natural[1]?
So the paper just inexorably disappeared, and to its credit the plastic got a smidge better, and we all just got used to accumulating all these fucking flimsy plastic bags of no use whatsoever. (The traditional fate of paper grocery bags was to cover next year's textbooks, be cut up for arts and crafts, be used to store quantities of paperbacks being donated to the library When I Get Around To It So Stop Bugging Me OK? and lots of other things I have now forgotten.)
-------------------
[1] Soylent Green bags, of course, but we haven't quite got that far yet.
I prefer *bagging* into paper bags, because the rigidity helps prevent products from flopping all over the place*...plastic bags work great for spherical cows, less good for anything with angles. They also tend to have less cubic capacity just in general. But for second-lifing, plastic makes a superiour trashcan liner. Can't remember the last time I ever actually bought separate garbage bags. (Paper bags are OK for compost, as long as nothing's too wet anyway...) But of course I know most of those plastic bags are gonna wrongly end up in a recycling bin, because bad heuristics, whereas the paper ones could end up in any bin and it's not an issue. Tradeoffs abound, as in all things.
WRT government mandates: the regressive "bag fee" tax in SF is 25 cents. (Waived for people paying with foodstamps, because #equity.) I guess the goal is to...punish people for using environmentally-optimal disposable paper or plastic bags, and coerce them into buying aspirationally-reusable ones? Which need anywhere between a dozen and a few hundred uses to be equivalently green? And that's why we sell them in a wide variety of fun collectable colours and patterns and seasonal designs? I'm too lazy to actually complain to my local legislator about this Obvious Nonsense, but finally got to the frustrated point of at least never bag taxing anyone who comes through my line. The best defense against unjust laws: putting enforcement in the hands of people with no incentive to do so.
* (The grocery chain I work for does paper by default; we didn't even offer plastic at all until Christmas a few years back, when there was a recycled-paper shortage, so then whoops, shitty long-term contract with some janky plastic bag manufacturer that makes godawful weird-shaped ones. Hence why our plastic bags permanently have Christmas designs.)
Handles are definitely a big selling point, and yep, you're right the plastic has improved considerably -- although now the gummint mandates the store charge me $0.10 each for them, so I carry string bags like a clochard.
Ah, but you see the government *has* to force these decisions on you, for you are a bumbling child who would slay Gaia in the stupendous depth of your ignorant cupidity, except at election time, when so long as you vote for more of the same instruction from your betters, you are the very font of reason and righteousness.
Always found it curious how some businesses went the Cass/Sunstein "nudge" route of mild stick punishment for "buying" a bag, whereas others went the route of mild carrot reward of a "discount" for BYOB (of any type). Taxes and fees seem much more common, because gubmint loves its political-winner free revenues. But I wonder if it's actually more effective at the stated goal of discouraging disposable bag usage.
Dude, you're following a bad news source that's promoting outrage bait about the right. That's all. This isn't a real thing.
I'm pretty sure this is just nutpicking, plain and simple. There are something like a hundred million Republican voters in the United States, and thousands of Republican politicians; finding a few who are foolish enough to advance such a proposition is meaningless. There may be threats to our democracy going forward, but this isn't one of them. It isn't necessary that we drive the number of people expressing such foolishness to literally zero. And it isn't persuasive to argue that since [other party] has a few loose nuts, [other party] must be driven into the outer darkness.
>significant minority
Significant to whom? Like, are these relevant party actors?
I have heard calls for "Qualified Voting" from both sides of the aisle in several countries, but unless there is a real effort backed by credible people, I'm not going to make broad moral condemnations.
Counterpoint: we are increasing the age at which people are allowed to do a lot of other things. You have to be 21 to buy alcohol, cigarettes, or a handgun, be an airline pilot (technically, get an airline transport pilot certificate), do interstate trucking, etc.
You also need to be at least 19 to get a Coast Guard captain's license.
I'm sure that I've missed a good number of other important items.
And with the standardization of post-secondary education to enter the labor force, people at 18 are also unlikely to be able to be self-sufficient.
Yup. If your brain is too underdeveloped to safely buy a gun or alcohol, why should that brain be able to choose someone to do violence on their behalf?
You certainly don't have fewer rights as a young person -- indeed, the state if anything prosecutes crimes against youth with greater vigor and attention than crimes against adults. But your responsibilities are circumscribed, and your rights exercised on your behalf by someone older, usually your parents.
The thesis is that you don't have sufficient judgment to exercise responsibilities...well, responsibly. So we say you cannot sign binding contracts until you're 18, and the responsibility to get educated, to take care of your health, and not be a social pain in the ass are exercised on your behalf by your parents. Almost all of this is designed to protect you against the folly of your own stupid decisions. Voting is no different: we want to protect you against the consequences of voting for the Pied Piper or Wicked Stepmother with the poisoned apple.
It's also true we want to protect *ourselves* against the consequences of your stupid decisions, where it could affect us (e.g. voting), but this is the secondary concern -- the primary reason we limit child responsibility is to protect the child.
We dislike varying ages for marking the transition from purely practical considerations. As you point out, the actual age of maturity can vary widely between individuals. I know 15 year olds I would trust to drive, fly a plane, or vote, and 25 year olds I would not trust to walk the dog. But we can't be doing tests on everyone, lacking the cheap Maturity-O-Meter the county clerk can just point at any individual to take a reading. So we set arbitrary age deadlines that work on average, and that's good enough. Indeed, we still argue over whether the deadline even on average ought to be 18 or 19 or 21 or 25 or something else -- e.g. there used to be, at least, states where you could get a driving license at 14, because kids on the farm needed to be able to drive pickups around.
The conservative position is friendlier to restricting youth franchise simply because youths tend to vote for experiment, and conservatism by definition does not think experiment is as valuable as progressivism does. Nor does conservatism think that participation in government is as important to the individual as progressivism does, because conservatism doesn't value collective activities as highly, and doesn't think government should be as significant a presence in life anyway.
The conservative-leaning places I visit tend more towards "We don't need to change our position. We just need to distance ourselves from Trump."
Ah, the most vocal part of the GOP right now is people who built their political careers on acting like victims, and the media (on both sides) loves to cover these people. It's hard for me to understand how much this actually reflects the feelings of Republican voters as a whole. Likely very few of them care about or have heard of this kind of thing, given that most voters (of all parties) are much less educated on specific candidates than we would expect or want.
Oh come. Let us not be naive, or transparently tribal. When was the last time any politician, of any stripe, sat down after a loss and said "gee, I guess I need to become more like my opponents" instead of "God damn it, we wuz robbed by [insert random conspiracy/bad luck story here]! We just need to double down on what we were doing before, it will surely work next time..."
Since the ol' pendulum reliably swings, this theory has the virtue that if you wait long enough, it will surely work out successfully, and for a politician (or political party) it's a lot easier to just wait a cycle or two (collecting cash from your outraged supporters all the while[1]) than it is to say well gee everybody, I guess everything we stood for was rubbish and we need to do more of what those people we called low-down good-for-nothing rascals last time were suggesting. Denial is a thing, and it often works well, and is easier than reconsidering your entire philosophy o' life. I mean, there's a pretty impressive level of denial going on right here in this discussion about the nature of the crypto biz. But it's easier to rationalize how it was a one-off, a bad apple, bad luck, et cetera, than going back to square one and saying holey moley, maybe I was wrong all the way back there.
When the Democrats lose, they're not any different about reconsidering the current norms to their advantage, while arguing fiercely all the way they're just leveling some playing field or other, or redressing some inequity, totally has nothing to do with any basic primate urge to be on top, e.g.:
https://thehill.com/homenews/house/3564588-house-democrats-offer-bill-to-add-four-seats-to-supreme-court/
Reconsidering the size of the Supreme Court because you're pissed about who got seated lately isn't materially different[2] from reconsidering the Twenty-Sixth Amendment because you're pissed about how many 18-21 year-olds voted for your opponents. And, fortunately, they're both equally DOA to anyone with a lick of genuine sense, and so serve mostly as a way for True Believers to indulge in revenge pr0n fantasy.
----------------
[1] A common error is to think political parties exist to acquire power. They certainly love doing that most of all, but they exist to make money and provide jobs for their adherents, and if they can do that in the minority, that's not such a bad thing at all. Indeed, it has certain advantages, since free of the responsibility of actual governing you can be a lot more strident in your righteousness.
[2] I am of course aware that anyone who scored highly on his verbal SATs can provide a closely argued rationalization for why they are like TOTALLY different, and only an imbecile such as myself could fail to see that.
I disagree about the court. Control of the court goes to whoever has the votes and the determination to use them. Once you have both, do what you want. Call it the McConnell rule.
Yeah it is. Extending or contracting the franchise is only one of very many ways to influence the path that power takes from The People to Our Guys In Office. If you single it out as an especially sacred totem to defile, then the a priori most likely reason is because it happens to be your tribe's totem. Other tribes get upset at threats to different totems.
I mean, just for example, the Red Tribe sees red at the notion that people who are not bona-fide citizens 100% willing to prove it at the polling place with a Real ID should be allowed to vote. We can probably all agree that *in principle* non-citizens shouldn't vote, right? But notice only the Red Tribe gets really upset that that particular totem might get defiled by, for example, lax vote-by-mail or same-day registration.
Edit: I realize I didn't really try to answer your question as best I could, and given that you were willing to engage I should. So: my best guess as to the underlying origin of this is a sense of enormous frustration coupled with denial. The frustration stems from seeing The People, bless their hearts, be just unable to comprehend the obviousness of the true threats to the Republic, despite being told them over and over again, and experiencing much pain as a direct result of them. Jesus (the thinking goes), the 70s are back with a vengeance, stagflation is destroying lives and the children are turning out stupid and needing therapy -- how much more motivation do you idiots need to Right the ship? (For a good example of the same sense of baffled frustration among the Blue Tribe, consider gun control. Jesus, how many of your own kids have to be shot to death, accidentally or on purpose, before you morons clue in?)
The denial part is being unable or unwilling to look seriously for what reasons might impel a reasonable man, who differs from you in only modest ways, to disagree so remarkably on which are the true threats, and which are the better ways to ameliorate them.
I think both come from the 50/50 state of affairs. If one were always in the 75% majority, there would be no frustration. If one were always in a 25% minority, there could be no denial. It's being...right...on...the borderline of being able to feel yourself in general agreement with the nation that drives both miseries.
So people reach for drastic remedies (because of the great frustration), but remedies that are simple-mindedly tribal (because of the denial). There is neither a respect for traditional norms (because of the frustration), nor a willingness to look for more nuanced solutions that might require mutual adjustment all around (because of the denial).
If your major point is to express disgust at the level to which some serious number of people are willing to indulge their narcissistic fantasies, above a hard-eyed evaluation of the real world, I agree with you 100%. It is a sad betrayal of the magnificent edifice of self-determination and individual liberty our ancestors left us (at great cost to themselves). We have lost a great deal of the ingredient *we* have to supply -- which is self-discipline.
I'm just observing somewhat snappishly that the Red Tribe holds no monopoly on major chunks of it acting like self-indulgent twits who take the social contract dangerously for granted.
I take those calls to raise the voting age to 25 as seriously as I do the calls to lower the voting age to 16 or lower. (This lot want to bring it down to 12 and they're *philosophers*, we should all be impressed right?)
https://www.cambridge.org/core/journals/royal-institute-of-philosophy-supplements/article/abs/radical-democratic-inclusion-why-we-should-lower-the-voting-age-to-12/D28B4C199A2CDB2AD903618CB8D3473A#
Which is: a bunch of interested parties think that by doing so they can turn out more voters for their side. The Democrats are just as bad on this as the Republicans, and it's not uniquely American; I've seen similar calls in my own country (how much this is the usual suspects just copying the Yanks and how much it is 'young people are progressive, they'll vote Us in to do progressive stuff' I can't be bothered parsing).
"But the fact that this is a thing at all in a significant minority of the Republican party is quite baffling"
Oh gosh, those naughty Republicans! Being supportive of the idea of changing the voting age!
Oh wait, this bunch are Democrats:
https://www.washingtonpost.com/graphics/politics/policy-2020/voting-changes/lower-voting-age/
https://vote16usa.org/reasons-for-lowing-voting-age-16/
https://fairvote.org/archives/why_should_we_lower_the_voting_age_to_16/
Now, you can quibble about "yes, but these are not the *party*", but they're just as involved in political exhortation as politicians so that's hair-splitting to my mind (if you're boasting about getting "We helped bring RCV to the 2020 Democratic primaries, where five states used it to select presidential candidates" and talking about your " ever-expanding network of state, local, and national allies" then you're not just plain John and Marjorie Citizen).
There's a fundamental difference between giving people rights and taking them away. Expanding rights may be foolish, but removing them is cruel. This isn't a "both sides" issue.
Think that one through with respect to the right of one man to own another, which was abolished by the Thirteenth Amendment.
Maybe a better way to look at it is whether a rearrangement of rights is positive-sum, negative-sum, or zero-sum. From a certain point of view the franchise is zero-sum, but I think taking into account more realistic considerations from psychology, political science, etc., expanding the franchise is *usually* positive-sum and restricting it is *usually* negative-sum.
Who says? If you ask me, expanding the franchise is always negative sum. I don't hold with the proposition that all men are equal (although they may well be born that way), and that there is identical value in every man's vote. Once you've got the people who everyone totally agrees should be voting, adding marginal cases is pretty much bound to dilute the average quality of the vote, and you'll descend to bread 'n' circuses, followed by tyranny, the way democracies traditionally do.
Given my druthers, I'd go back to the days when only 1 in 50 people had the right to vote, maybe fewer, and make people compete for it. All-day exams of extraordinary difficulty, being able to speak 3 languages fluently and play the oboe, maybe get through a deadly obstacle course without losing more than one limb, gladiatorial and/or chess combat to decide ties.
If you're going to put ultimate power in the hands of voters, which a republic does, they ought to be the best hands you've got. It's logically absurd to expect the average schmo to be able to consistently elect above-average political leaders. If you want above-average government, you're going to need above-average voters.
There might be some argument to be made for a truly elite vote-of-the-technocracy, but in the US of A we started out with basically all the white male landowners, so it's not like any of the expansions of the franchise were starting out from a position anything like that. I continue to maintain that for *most* distributions of the franchise, including all the ones we've tried so far, expanding it is positive-sum by virtue of increasing fairness and inclusion in a system that already can't count on voters' expertise. I mean, the old white-male-landowner electorate managed to end up with Andrew Jackson, and following that, a slide into civil war. Nothing that bad has happened here since the franchise started seriously expanding!
I'm in favor of lowering the voting age to 16 or even (?!) 12 but I've been in favor of that since I was 16 myself, without interruption, and well before I was voting D. I can remember being disenfranchised at those ages. It mattered. It wasn't a good thing. I really think we should go even younger than 12, and it's not nearly as clear that it will help the Ds at that point.
Well, the 2016 presidential election resulted in Congressmen calling for electors to ignore their state's elections and vote Hillary in anyway (which backfired wonderfully when electors instead refused to vote for Hillary), and lots of internet commentary about eliminating the Electoral College and/or the Senate. It's just the nature of politics these days.
Pretty sure it's down to the Internet putting the world in contact with all the worst people, and politicians still mostly being used to the pre-internet days when opinions had to reach critical mass before they got broadcast and archived. Hopefully people will eventually realize how the internet works and future politicians will learn better than to chase the approval of the loudly insane.
Fortunately, calling for voter restrictions is dead on arrival; you'd need a majority to pull anything like that, and people calling for it are only doing so because they don't have the majority. The gates will only ever open wider.
(It would be hilarious if they raised the voting age to 30; you only have to be 25 to run for Congress.)
Since the 18 year old voting age is in the Constitution (26th Amendment), you'd need a lot more than majority support in order to change it.
I agree. I think it's pretty weak, intellectually, to see this incident as an indictment of EA. I am not an EA enthusiast and this doesn't change my thinking on the subject. Many were so quick to "blame" EA. Seems like you could just as easily use this to say "people with autism shouldn't run companies". Of course no one would or should say this, but it's just as valid as the EA criticism!
I wouldn't be surprised if someone said that. It's not even necessarily wrong, if we're willing to broaden the overton window quite a bit, but that's not a conversation I or many other people want to have.
Obviously this doesn't discredit the entire concept of effective altruism, but it does point towards confirmation of a few red flags I've been noticing in the broader EA culture the last few years, including the too-quick trust in anyone who displays the right subcultural affiliations, and the overabundant enthusiasm for crypto and financial stuff generally.
The modern EA/rationalist/postrationalist milieu feels overall too much like a social scene for my taste, tied up with individual personalities and the associations between them, when I feel like the original concept was more about abstract impersonal rules for thinking and acting better, which could have been applied in any society past, present, or future.
Ironically though, that concept appeals to a specific type of person, so in retrospect I think it was inevitable that a relatively homogenous network of people was going to spring up, and to a large extent that's necessary anyway if you're going to coordinate anything effective. And with that comes the inevitability of at least a few bad actors, no different really from Enron or Theranos or a hundred other business ventures gone wrong, so you may be right that this doesn't actually say much about EA in the big picture.
>Many were so quick to "blame" EA.
I think people in EA circles are seeing this a lot more than is really out there. From the mainstream financial and crypto press I am following, everything is about crypto and its issues, and there is zero mention of EA.
Yes, this is definitely true. I probably wouldn't have seen the EA angle except I follow this blog and Sam Altman on Twitter and a few other people who would comment on it.
I don't know the people involved well enough to make any judgements on their moral characters. That they seem to have been Ye Olde Bay Area Rationalist Sub-Culture Types inclines me to the notion that they really *did* believe in EA and all the other do-goodery.
They also had all the faults and flaws of Ye Olde Bay Area Rationalist Sub-Culture types, and I think that is what contributed to the mess. Bankman-Fried got a lot of incense burned before him in online articles and elsewhere, and such flattery is very sweet. Are we surprised he started to believe his own hype and that he couldn't do wrong? That he had the Midas Touch? (That the touch of Midas is a *cautionary* tale seems to pass many people by).
The story of Chu-bu and Sheemish is instructive here, I think:
https://www.sacred-texts.com/neu/dun/tbow/tbow15.htm
+1
The price of the Ponzied asset is already a prediction market on its future value, which incorporates the probability that the asset will collapse in some financial scandal.
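The expected-value intuition behind this can be sketched quickly. This is a minimal illustration with made-up numbers, not an estimate for any real asset: a risk-neutral price already discounts the probability of collapse.

```python
# Sketch of the expected-value intuition: the market price of an asset
# already discounts the probability that it collapses in a scandal.
# All numbers here are illustrative.

def discounted_price(value_if_sound: float, prob_collapse: float,
                     value_if_collapse: float = 0.0) -> float:
    """Risk-neutral expected value of the asset."""
    return (1 - prob_collapse) * value_if_sound + prob_collapse * value_if_collapse

# If the asset would be worth $100 if legitimate, but the market assigns
# a 30% chance of a fraud-driven collapse to zero:
print(discounted_price(100.0, 0.30))  # 70.0
```

So "the scandal risk is priced in" just means the $70 quote already blends the two scenarios; new information only moves the price by moving `prob_collapse`.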
"We are what we repeatedly do."
I like the original quote especially for the way it resonates with the modern understanding of neuroscience, habit formation, dopamine, etc. It reminds me that there are real physical consequences in my physical brain of repeated actions.
This is so important. Transforming a reflective "am I doing a good thing today" into "I'm doing a thing to then do a good thing today" introduces a weighting problem and fuzzes things up really fast.
Layer in a bunch of wordsmithed justifications about maximizing giving and I feel like we need a math formula because things have gotten really indirect.
I'm really concerned that this just straight up doesn't work at scale.
It's probably a step worse than that.
You get free status points for the Y that you're not focused on.
I'm gesturing at the Efficient Market Hypothesis here; I don't think there's a military strategy equivalent.
The EMH is false?
How many people in the Western world predicted at the outset of the war that Russia would lose its fleet though? If there were armchair admirals who knew better they were pretty quiet, and most laypeople took it as a matter of faith that Orientals would be soundly beaten by the Tsar’s Christian military. So doesn’t that support Scott’s argument?
Generally you can't just get into running a navy, so even if you know something Tsar Nicholas doesn't, you can't simply demonstrate this. But it is actually reasonably possible to start up a new hedge fund and have your results eventually speak for themselves. This is a special quality of finance, which is why there's a special efficient markets hypothesis, rather than a general efficient-everything hypothesis.
So start a ten thousand dollar hedge fund to prove the strategy works.
The fact that I let $100 in cash sit in my bank account the last month or so, ROI = 0%, instead of investing it with FTX, ROI << 0, is already prima facie proof that I'm a smarter investor.
When other people are racking up $millions in losses, you don't have to build a +$10 billion business to prove you're smarter, you just need to not fuck up as badly. Pretty low bar to clear.
It sounds like you don't really know much about hedge funds.
Yes, it is a fully generalizable argument to all criticism of authority. I've also heard people claim that EA never claimed to have special insight into how to run charitable organizations. Lots of defensiveness.
If you can outpredict the market on which companies will increase/decrease in value, you can accumulate outsized returns with no additional resources needed.
If you are a better general, what equivalent action can you do to get an advantage? I don't think there is one.
This seems like a genuinely important distinction. If someone claims to have really known what was going to happen to FTX beforehand, I would be most persuaded by them showing me how much money they made by shorting it.
There is no reason to believe someone who knew FTX was probably fraudulent would have made money off its collapse. This is because the ideal point for shorting the stock is not when FTX became a fraud but when the fraud is uncovered. I freely admit I do not have a great skill at predicting when the regulators or general public will become aware of frauds. This is an entirely distinct skill from determining what is or isn't fraudulent.
I agree that timing considerations could have made it harder to make money off of this.
It's still true that it is an opportunity to make money, and so if someone claims to have had this information but did not short FTX we should update towards them not being as confident in the past as they claim. This is especially true since if you were really certain about fraud, you could cite the evidence that made you worried publicly, and thereby kick off investigation.
There isn't an equivalent opportunity for criticizing military expertise, so conflating them seems inaccurate.
I just explained why shorting behavior is a poor proxy. Shorting requires you not to know it's a fraud but to predict the specific time of collapse. If you short and the stock collapses but on the wrong day you can still lose money. Knowing something is a fraud does not imply knowing when it will be discovered.
So no, you shouldn't update that way.
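The timing point above can be made concrete with a toy P&L model. This is a sketch under assumed, illustrative parameters (a flat daily borrow fee and a forced close-out when cumulative losses exceed a margin limit); real short mechanics are more complicated, but the failure mode is the same: a correct fraud thesis with wrong timing still loses money.

```python
# Toy P&L per unit shorted. Assumes (illustratively) a constant daily
# borrow fee, a price that goes to zero on the collapse day, and a
# margin call that force-closes the position if fees exceed a limit.

def short_pnl(entry_price, collapse_day, horizon_days,
              daily_borrow_fee, margin_limit):
    """Return P&L, or the margin-call loss if the position is
    closed out before the collapse arrives."""
    fees = 0.0
    for day in range(1, horizon_days + 1):
        fees += daily_borrow_fee
        if fees > margin_limit:
            return -fees               # forced out before being proven right
        if day == collapse_day:
            return entry_price - fees  # price goes to ~0, minus carry costs
    return -fees                       # collapse never arrives in the window

# Right thesis, right timing: collapse on day 30.
print(short_pnl(100.0, 30, 365, 0.5, 50.0))   # 85.0
# Right thesis, wrong timing: collapse on day 200; margin call hits first.
print(short_pnl(100.0, 200, 365, 0.5, 50.0))  # -50.5
```

The same correct belief ("this is a fraud") produces opposite outcomes depending only on when the fraud is uncovered, which is the skill the commenter is saying is distinct.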
I agree with the idea that it's fine to take shady people's money to do good things; I get really really *really* annoyed at the "Trusted Entity is secretly funded by Evil Entity!" brand of outrage-journalism.
The key differentiating factor in this case is that it turns out it wasn't really *his* money. And that was...not entirely unforeseeable.
I normally have some trouble explaining the heuristic I use to decide that someone is untrustworthy, but in this case it's super easy: every crypto project is a scam, everyone in the crypto space is trying to con "investors" out of their money, everything they say or do is at least partly in service of that goal, and anyone who controls other people's crypto assets is embezzling them. If you frequent crypto spaces - not just now after this scandal, but at any time since Mt. Gox went down in 2014 - people will literally tell you over and over that you should use these heuristics. They're astonishingly open about it.
(I have some theories about why crypto culture is like that, but I think they're a bit much for an open thread comment.)
Perhaps a good intuition pump: these are by and large people who loudly proclaim that banks and existing financial systems are fraudulent. When they set up their own banks and financial systems, they are likely to feel that all their competition is fraudulent too. That's a really sticky trap: it makes it easy to slide into fraud yourself.
Obviously not every cryptocurrency person is motivated like this or behaves like this.
I actually think the causality is mostly in the opposite direction for the major figures in crypto culture (typical mind fallacy: they think everyone else is defrauding everyone all the time because they want to defraud everyone all the time) but I'm sure there's a feedback loop, plus some disinhibiting effects of normalizing bad behaviour and blaming victims.
And you're right: not everyone in crypto is like that. (I'm not like that, although you shouldn't believe me.) I'm just pretty sure that for any decision with real financial/legal/physical consequences, it's best to assume that anyone who wants you to exchange money for crypto or to turn over control of your coins is trying to steal all your money, destroy your reputation, and implicate you in multiple felonies.
But the problem is that someone can reword all those cautions to be about "what I'm doing is not unethical, it's high risk but it's justified by these paradigms because the projected return is so great and the utility will be so maximised!"
Nobody ever thinks their schemes are going to go bust, they expect to make big money. Like betting systems that won't fail, or 'if you lose at the racetrack, keep betting so you will get back the money you lost'. If Bankman-Fried had been merely a common fraudster, he would have had some plan in place so he could have scooted before more people realised the dominoes were tumbling down.
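The "keep betting so you will get back the money you lost" scheme mentioned above is the classic martingale: double the stake after every loss. With an infinite bankroll it never loses; with a finite one it reliably busts. A quick simulation under assumed, illustrative parameters (bankroll 100, base bet 1, a slightly unfavorable 49% win probability) shows the failure mode:

```python
import random

def martingale_run(bankroll, base_bet, win_prob, rounds, rng):
    """Play the doubling strategy; return final bankroll (0.0 = ruin)."""
    bet = base_bet
    for _ in range(rounds):
        if bet > bankroll:
            return 0.0            # can't cover the doubled bet: ruined
        if rng.random() < win_prob:
            bankroll += bet
            bet = base_bet        # a win recovers all losses plus base_bet
        else:
            bankroll -= bet
            bet *= 2              # double down to chase the loss
    return bankroll

rng = random.Random(0)
trials = 2000
ruined = sum(martingale_run(100.0, 1.0, 0.49, 1000, rng) == 0.0
             for _ in range(trials))
print(f"ruin rate over 1000 rounds: {ruined / trials:.0%}")
```

With these numbers, a losing streak of six or seven bets already outruns the bankroll, and over a thousand rounds such a streak is nearly certain, so almost every trial ends in ruin. The system "that won't fail" fails almost surely.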
Even among high risk options, there surely is a separation between ethical and unethical ones. Buying a big stake in ethereum hoping it goes up certainly seems ethical, though not particularly wise.
This is just deontology, isn't it? How do you define unethical in a consequentialist system without reference to the consequences? If the consequences are referenced and the arbiter of whether an action is moral then how do you not end up with the ends justifying the means? These aren't impossible questions. But "every individual step must be moral" is just deontology, where the morality of the actions matter more than the consequences.
In rule-consequentialism, you don't refer to the consequences of a specific action, but you do refer to the aggregate consequences of consistently following a proposed rule.
(Depending on the version, you may refer to the aggregate consequences of *everyone* following the same proposed rule, at which point the line between rule-consequentialism and deontological ethics starts to get very blurry and lose most practical relevance.)
Morality in rule-consequentialism means consistently following your established rules even when you believe that breaking them would produce a better outcome in a particular instance. This is still a consequentialist attitude, just at a higher level of abstraction and a longer time scale.
Is there a relation between rule consequentialism and virtue ethics? These established rules sound a lot like virtues.
Rules consequentialism does not involve "acting ethically in all cases" though. That's deontology. For example, rules consequentialism could have an unethical rule that has positive social effects. If every action must itself be ethical that's deontology, isn't it?
If consistently following a rule has net-positive consequences, then the rule is ethical by consequentialist standards.
If you hold all your rules to deontological standards, then yeah, you're kind of doing deontology(ish), but that's a bit tautological.
And there's a lot more to deontology than "every action must itself be ethical." That description applies just as well to act-consequentialism, most versions of common-sense ethics, and divine command ethics. The distinguishing features of deontology are (1) the specific algorithm for determining what action is ethical (https://en.m.wikipedia.org/wiki/Categorical_imperative) and (2) the focus on the reason behind the action.
(In deontological ethics, an action is only ethical if you do it *because* it's ethical, out of a sense of duty. Doing the right thing for the wrong reason is immoral, and concern for the consequences is the wrong reason. So it's impossible for a consequentialist to *actually* practice deontological ethics even if they always pick the same action a deontologist would. And even picking the same actions is only plausible if they adopt one of the universalizing forms of rule-consequentialism, which are unusual.)
> And there's a lot more to deontology than "every action must itself be ethical." That description applies just as well to act-consequentialism, most versions of common-sense ethics, and divine command ethics
No, it really doesn't. Your definition of act-consequentialism doesn't say every action must itself be ethical but that the results of every action must be ethical if you imagine it applied universally. Deontology says that there are moral rules that directly inform actions regardless of consequences. Divine command ethics say that morality is known from divine will which is similar to deontology but distinct in that divinity can change the rules or be vague in ways deontologists disagree with.
> The distinguishing features of deontology are (1) the specific algorithm for determining what action is ethical (https://en.m.wikipedia.org/wiki/Categorical_imperative) and (2) the focus on the reason behind the action.
The categorical imperative is basically your definition of act-consequentialism though. The rule is literally "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." The reason it crosses over into deontology is because of its insistence each action must itself be moral rather than just the results. It's also a specific attempt to fuse consequentialism into a more deontological framework so it's a bad example of core deontology.
>In deontological ethics, an action is only ethical if you do it *because* it's ethical, out of a sense of duty. Doing the right thing for the wrong reason is immoral, and concern for the consequences is the wrong reason.
Yeah, no. You have a weird definition of deontology that isn't widely held or in the link you posted. Deontology doesn't care about intent. Deontology is about the application of moral rules that universally apply at all points. For example, if you believe that stealing is wrong, then deontologically, stealing from the rich to give to the poor is wrong even if you agree the consequences are good. Act utilitarianism, meanwhile, might ask: what if everyone stole from the rich and gave to the poor? And then you might end up with an act utilitarian saying: Well, what if we regularly stole from the rich in fixed amounts and gave it to the poor? That would increase net utils while being predictable enough so as to not destroy the wealth we're stealing from. To which the deontologist would just reply: stealing is wrong. It's wrong regardless of how much you mitigate its effects.
This specific distinction is highly relevant to this FTX stuff.
> So it's impossible for a consequentialist to *actually* practice deontological ethics even if they always pick the same action a deontologist would.
This implies there's no meaningful difference between consequentialism and deontology except in the minds of people. Which isn't true or what most people think.