The person you describe as knowing sounds awfully like Caroline Ellison, in which case, let me say that I refuse to believe that she could have acted in bad faith until I am given overwhelming evidence to the contrary. On the contrary, the impression I got is of a true believer, and a good person. This does not preclude the possibility that, under circumstances of a certain naiveté and inexperience in a field as murky as crypto, she might have let herself go along with what she might have perceived as temporary and 'bad' expedient means. But to believe this person ever intended to purposely and maliciously scam people out of their money or be privy to a fraud is, for me, completely out of the question. I believe the best option is to be charitable and wait to see what the courts of law have to say once the dust has settled.
Here's the rub. Bankman Fried and his woman look like goblins. Human beings instinctively recoil from goblins. Rationalist utilitarians say 'no, there's no rational reason to recoil from people who look like goblins'. But there is.
Now, it's theoretically possible to construct a version of utilitarianism that would be sufficiently inclusive of both heuristic rules and the dark, dark secrets of HBD and psychology. But the problem is that it would be very complicated and the whole point of utilitarianism is to simplify morality. So, in practice, rationalist utilitarianism always ends up saying some version of 'no, there's no rational reason to recoil from people who look like goblins'. But, to reiterate, there is.
I feel like the SBF/FTX thing demonstrates an understated flaw in the logic of effective altruism: that the movement, by orienting itself around large-dollar philanthropy, is dependent on individuals who are fundamentally incapable of meaningfully altruistic behavior. I remember Scott at one point criticising the idea of a leftist revolution on the basis that the individuals and structures that revolution would produce would never be equality-maximising but instead power-maximising, as per the demands of revolutionary violence. Couldn't the same argument be made that the kinds of personalities and structures that accumulate enough wealth to give the sums EA depends on are fundamentally antithetical to meaningful rational or altruistic behavior?
Reading this week’s Douthat column I kind of regret rage-canceling my paid subscription to Scott’s Substack. “You won’t get a refund,” said Substack. “No more hidden OTs for you.” “I don’t care!” I replied.
I am curious how Scott will address the appeal for a bit of perhaps less than maximally effective altruism tho.
I need advice for a friend who is in ungodly amounts of pain. I am thinking about the SSC article on pseudoaddiction- miner who takes opioids for years for horrible mining injuries speaks brusquely to hospital staff, gets his opioids taken away, shoots himself in the chest, miraculously survives, etc. My friend has been taking opioids for 6 years after a horrific car accident, and their doctor is threatening to take them away. What should they do?
I may post this again, but I wanted to get it said. This might be an added explanation for research slowing down, but I don't know whether it's as bad in the sciences as it is in the humanities.
The short version is that it's not just bad at amazon and google, search has become relatively useless at academic sources.
Here's how I got past a cataloguing issue. I'd heard for years that things had gotten gradually better for Jews in Germany, especially in the Weimar Republic. I realized there was a story there, but what was it?
Searching on Google didn't help. I was just getting anti-semitic stuff.
A couple of years later, I think, Google changed its policies. Now I was getting stuff *about* Nazis. They're more interesting than gradual legal change.
Finally I asked people. I got pointed at the emancipation of the Jews and a book called The Pity of It All.
The moral of this story is that you may have to ask people because computer search isn't working. It's like being in the middle ages or something where local and specific knowledge is the essential thing.
Anybody know why DSL is inaccessible?
So, what's up with media being so nice to SBF? (https://twitter.com/loopifyyy/status/1592944362274816000?s=46&t=QYFASLmu7f_nv9WfJkguFA). What's the underlying cause? Or is the premise false and these articles are cherry-picked?
Do people think SBF started out intending to run a scam? If not, approximately when did he start running a deliberate scam?
Excuse me if this has been brought up already.
I would be inclined to update to mistrusting people who talk a lot about their own virtue.
I know the issue of 'use cases' with crypto has been beaten to death, so I apologize in advance for the redundancy, but I would like to ask the smart people on this chat this: Is crypto the first example of an innovation/commodity for which the garden-variety champion cannot explain the 'use case' to the typical rube?
Self-disclosure: In this context (and others, without question, but those aren't relevant here) I am the rube. And I have read all manner of interviews with the likes of SBF and the desperately malnourished kid who started Ethereum, and whenever the question of 'so what is it really good for' comes up we get the inevitable 'that's a really good question!' (to all who have been to an academic conference, feel free to laugh with me!) and then a bunch of 'blah blah blah decentralized blah blah blah' and we move on to the next question.
So, I'm not here to argue that crypto 'doesn't' have a use case, because it absolutely might, and I can see some distinct paths where it does. But in terms of explaining it to the average guy, I think it's fallen laughably short. Which, to my eye at least, is an interesting feature of this commodity, since I can't think of another example of an asset that has this unique property.
Am I wrong? Have there been others? If not, is this an augury of what's to come (i.e., more assets that end up worth more than the GDP of Brazil but that nobody can clearly explain how they will improve our lives in the short-to-medium term)? Or if so, what were they and what happened to them?
I'm looking at a technical writing job where I need to create a Single Source of Truth from documentation where information has been copied and modified with multiple versions. I have an idea of how to do this (create a template, fill it with reliable information, and then offload all the conflicts into a 'conflicts in this topic' section below the template. Create an issue of these conflicts in Jira and then allocate time towards resolving them.)
What I'd like is some kind of authority to either show me a better way or else to help me justify the course I'm considering. All the writing is about 'why you should create an SSOT' and not how to manage the process itself.
Cryptofascism in action on Scott Alexander's substack: https://astralcodexten.substack.com/p/open-thread-250/comment/10493983
I remember reading a study where the authors wrote two identical papers about political violence, except that they replaced "left-wing violence" with "right-wing violence" in the second. They then tried to get them published and tracked the results. Does anyone know about it? I can't find it anymore.
Should you put your university grades on LinkedIn?
On the one hand, if you don't put them on I suspect viewers will think you are hiding something and may think that you are not competent. This is probably a particular concern for black students, given that viewers may make incorrect inferences about their grades based on statistical data.
On the other hand, if they are put on your profile it might seem like showing off (if they're really good), and indicative of a kind of insecurity, as if I'm the kind of person who needs to show off their grades. Also, it might make others who have worse grades feel bad about not having similar achievements.
How do people normally deal with this?
Aren't those prediction markets easily manipulated?
1: Publicly place a large bet on your own trustworthiness through 2023
2: Through a sock puppet place a smaller bet on your committing fraud in 2024
3: Allow the first bet to shift odds against the second
4: On Jan 1 2024 after collecting your modest reward, commit fraud. Reap rewards of fraud. Then reap rewards from betting on fraud while presumed honest.
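The arithmetic behind steps 1–4 can be sketched with made-up numbers. This is a rough illustration, not a model of any real market: it assumes a simple binary prediction market where a winning bet placed at implied probability p pays out 1/p per unit staked, and the stakes and odds are invented for the example.

```python
# Hedged sketch of the manipulation scheme above, with invented numbers.
# Assumption: a winning bet placed at implied probability `price` returns
# stake/price, so net winnings are stake/price - stake.

def payout(stake, price):
    """Net winnings for a winning bet placed at implied probability `price`."""
    return stake / price - stake

# Step 1: large public bet that you will NOT commit fraud through 2023.
honest_stake, honest_price = 10_000, 0.95   # market already trusts you
# Step 2: small sock-puppet bet that you WILL commit fraud in 2024.
fraud_stake, fraud_price = 1_000, 0.05      # long odds, cheap to buy
# Step 4: collect the modest reward, then commit fraud and cash the long shot.
modest_reward = payout(honest_stake, honest_price)
fraud_reward = payout(fraud_stake, fraud_price)

print(round(modest_reward, 2))  # ~526.32: small payoff for looking honest
print(round(fraud_reward, 2))   # ~19000.0: large payoff on the long-shot bet
```

The asymmetry is the point: the "honest" bet is cheap reputation-laundering, while the sock-puppet bet at long odds pays out many times its stake once the fraud happens.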
My hypothesis for why your subconscious wanted to put your thoughts on FTX on an open thread:
The criticisms of FTX and people saying "I told you so" will have to share space (on an incredibly slow loading page) with a bunch of self promotion, making it less forceful.
I think the information in the column I excerpted above reflects very poorly on Sam Bankman-Fried. His subjective intentions may have been quite benign. But, it appears that he was extremely reckless in the way he organized and ran FTX's business.
If you expect to have strangers entrust you with billions of dollars of their property, you must at the very least keep meticulous records of how much you received, who you received it from, and the conditions of receipt. You must also keep equally meticulous records of what you did with the property you received, etc. Those records ought to be capable of producing a high-quality balance sheet at all times. The fact that they couldn't is telling.
Right now, I would say that SBF is in very deep legal trouble, that he is likely to be indicted, convicted, and jailed for committing fraud.
Happy quarter-thousandth open thread, everyone!
I understand that Robert Wright went on Bret Weinstein's YouTube channel (the Darkhorse Podcast) a couple of weeks or so ago, to debate Eric Weinstein's probably-crackpot theories/models and perhaps other things. Does anyone know whether/when Bret Weinstein will post this debate? I was really kind of looking forward to it. I see no signs of this showing up on Bret Weinstein's YouTube channel and am even wondering if Wright misspoke and he spoke directly to Eric instead, but searching "Robert Wright Eric Weinstein" isn't turning anything up either.
This is a small excerpt of a column on Bloomberg.com by Matt Levine, formerly an editor of Dealbreaker, an investment banker at Goldman Sachs, a mergers and acquisitions lawyer at Wachtell, Lipton, Rosen & Katz, and a clerk for the U.S. Court of Appeals for the 3rd Circuit. I have not included quotation marks, but what follows is a direct quote, minus links and footnotes. It is not my opinion, as I have no first-hand knowledge of the facts:
Money Stuff: FTX’s Balance Sheet Was Bad
By Matt Levine • https://www.bloomberg.com/opinion/articles/2022-11-14/ftx-s-balance-sheet-was-bad
... the balance sheet that Sam Bankman-Fried’s failed crypto exchange FTX.com sent to potential investors last week before filing for bankruptcy on Friday is very bad. It’s an Excel file full of the howling of ghosts and the shrieking of tortured souls. If you look too long at that spreadsheet, you will go insane. ...:
Sam Bankman-Fried’s main international FTX exchange held just $900mn in easily sellable assets against $9bn of liabilities the day before it collapsed into bankruptcy, according to investment materials seen by the Financial Times.
... And yet bad as all of this is, it can’t prepare you for the balance sheet itself, published by FT Alphaville, which is less a balance sheet and more a list of some tickers interspersed with hasty apologies. If you blithely add up the “liquid,” “less liquid” and “illiquid” assets, at their “deliverable” value as of Thursday, and subtract the liabilities, you do get a positive net equity of about $700 million. (Roughly $9.6 billion of assets versus $8.9 billion of liabilities.) But then there is the “Hidden, poorly internally labeled ‘fiat@’ account,” with a balance of negative $8 billion.  I don’t actually think that you’re supposed to subtract that number from net equity — though I do not know how this balance sheet is supposed to work! — but it doesn’t matter. If you try to calculate the equity of a balance sheet with an entry for HIDDEN POORLY INTERNALLY LABELED ACCOUNT, Microsoft Clippy will appear before you in the flesh, bloodshot and staggering, with a knife in his little paper-clip hand, saying “just what do you think you’re doing Dave?” You cannot apply ordinary arithmetic to numbers in a cell labeled “HIDDEN POORLY INTERNALLY LABELED ACCOUNT.” The result of adding or subtracting those numbers with ordinary numbers is not a number; it is prison. ...
For a minute, ignore this nightmare balance sheet, and think about what FTX’s balance sheet should be. ... But broadly speaking your balance sheet is still going to look roughly like:
Liabilities: Money customers gave you, which you owe to them;
Assets: Stuff you bought with that money.
And then the basic question is, how bad is the mismatch. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn’t do that, and couldn’t have, because there wasn’t $5 billion of Serum available to buy. FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that’s something?” No it isn’t! ...
If you think of the token as “more or less stock,” and you think of a crypto exchange as a securities broker-dealer, this is completely insane. If you go to an investment bank and say “lend me $1 billion, and I will post $2 billion of your stock as collateral,” you are messing with very dark magic and they will say no. The problem with this is that it is wrong-way risk. (It is also, at least sometimes, illegal.) If people start to worry about the investment bank’s financial health, its stock will go down, which means that its collateral will be less valuable, which means that its financial health will get worse, which means that its stock will go down, etc. It is a death spiral. ...
In round numbers, FTX’s Thursday desperation balance sheet shows about $8.9 billion of customer liabilities against assets with a value of roughly $19.6 billion before last week’s crash, and roughly $9.6 billion after the crash (as of Thursday, per FTX’s numbers). Of that $19.6 billion of assets back in the good times, some $14.4 billion was in more-or-less FTX-associated tokens (FTT, SRM, SOL, MAPS). Only about $5.2 billion of assets — against $8.9 billion of customer liabilities — was in more-or-less normal financial stuff. (And even that was mostly in illiquid venture investments; only about $1 billion was in liquid cash, stock and cryptocurrencies — and half of that was Robinhood stock.) After the run on FTX, the FTX-associated stuff, predictably, crashed. The Thursday balance sheet valued the FTT, SRM, SOL and MAPS holdings at a combined $4.3 billion, and that number is still way too high.
I am not saying that all of FTX’s assets were made up. That desperation balance sheet lists dollar and yen accounts, stablecoins, unaffiliated cryptocurrencies, equities, venture investments, etc., all things that were not created or controlled by FTX.  And that desperation balance sheet reflects FTX’s position after $5 billion of customer outflows last weekend; presumably FTX burned through its more liquid normal stuff (Bitcoin, dollars, etc.) to meet those withdrawals, so what was left was the weirdo cats and dogs.  Still it is striking that the balance sheet that FTX circulated to potential rescuers consisted mostly of stuff it made up. Its balance sheet consisted mostly of stuff it made up! Stuff it made up! You can’t do that! That’s not how balance sheets work! That’s not how anything works!
Oh, fine: It is how crypto works. ... It looked like a life-changing, world-altering business that would replace all the banks. It had a token, FTT (and SRM), with a multibillion-dollar market cap. You could even finance it, or FTX/Alameda could anyway: They could put FTT (and SRM) tokens in a box and get money out. (From customers.) They could take the dollars out and never, you know, give the dollars back. They just got liquidated eventually. And those tokens, FTT and SRM, were sort of like real monetizable stuff in some senses. But in others, not.
But where did it go?
I tried, in the previous section, to capture the horrors of FTX’s balance sheet as it spiraled into bankruptcy. But, as I said, there is something important missing in that account. What’s missing is the money. What’s missing is that FTX had at some point something like $16 billion of customer money, but most of its assets turned out to be tokens that it made up. It did not pay $16 billion for those tokens, or even $1 billion, probably.  Money came in, but then when customers came to FTX and pried open the doors of the safe, all they found were cobwebs and Serum. Where did the money go?
I don’t know, but the leading story appears to be that FTX gave the money to Alameda, and Alameda lost it. I am not sure about the order of operations here. The most sensible explanation is that Alameda lost the money first — during the crypto-market meltdown of this spring and summer, when markets were crazy and Alameda spent money propping up other failing crypto firms — and then FTX transferred customer money to prop up Alameda. And Alameda never made the money back, and eventually everyone noticed that it was gone.
So Reuters reported last week:
At least $1 billion of customer funds have vanished from collapsed crypto exchange FTX, according to two people familiar with the matter.
The exchange's founder Sam Bankman-Fried secretly transferred $10 billion of customer funds from FTX to Bankman-Fried's trading company Alameda Research, the people told Reuters.
A large portion of that total has since disappeared, they said. ...
Oh, a place where the topic is Ukrainian FTX transactions, as opposed to some missile landing in Poland close to the Ukrainian border. From the first impression it very much looks like some unfortunate mistake ... but the level of nervousness in my social networks is considerable.
I just wanna remark how the currently 1416 comments on this thread expose how utterly crap substack is as a piece of technology. It takes my gaming PC ~20 seconds to load the top of the comment section, and I'm getting repeated multi-second freezes while I'm writing this comment.
Yes of course. As I made clear multiple times, the accusation was that flows went in the other direction, Ukraine diverting US aid to FTX for crypto of dubious worth so FTX could donate back to the politicians who passed the aid bill.
But evidence of this direction occurring is lacking.
True but stop with the “signal-boosting” crap. I was seeing it mentioned in a lot of places so it seemed worth adding here as a possibility to be considered. It’s a frigging OPEN THREAD. I appreciate very much the people on this thread who gave reasons why the story was likely to be wrong but the people who instead suppressively implied that I should STFU should STFU.
Is this a good time to update my priors to "never trust anyone or anything ASSOCIATED WITH CRYPTOCURRENCY again"?
What's the longest period until positive returns we can find for an investment? Can we find something that, for example, required fifty years of payments to get things working, but then in the fifty-first year began producing returns? I guess this would be particularly interesting if we found something that in the end turned out to be a good investment, despite the extremely long period of negative returns.
EVERYONE, want to understand what happened? IMO, this article by Matt Novak gives the best start for understanding FTX & Samuel Bankman-Fried. It also works as an introduction to cryptocoins at large. Then, to understand how people got bamboozled, from the inside, read the NYT article it corrects. The NYT writer has yet to get wise.
Please, #9's “how can I ever trust anybody again?” is the wrong question, with a misguided answer.
If people promote Wrong, no matter how trustworthy the people, the Wrong remains wrong. The original bitcoin whitepaper is Wrong. Specifically, it spoofs monetarism. Possibly intentional IMO.
Ask a better question. "How can I make sure I know the basics in a field before I commit to a position in it, let alone commit resources?"
The accusation is probably FALSE, but it isn’t NONSENSE, you just misunderstood it. The accusation is that some of the aid money the politicians sent to Ukraine was used to buy crypto from FTX rather than spent on actual, you know, AID, and FTX then made huge amounts of political contributions.
The rebuttal is that all the transactions between Ukraine and FTX were in the other direction from that, which is fair. But the accusation was a coherent story.
In general, the whole thing with EA seems similar to many other things that appear ridiculous about rationalism. You take a simple idea that is eminently sensible when you put it in a few words. Charitable giving is often inefficient; what if we start evaluating charitable giving by how much bang for the buck you get for it? Very sensible!
Then you put it in a crowd of people with a few well-known features: love of Big Ideas, addiction to the novelty of new ideas or revisionist takes on existing ones, an almost comical belief in the power of Reason over tradition/law/taught ethics/societal approval/etc. (it's right there in the name of the crowd), and a tendency for constant iteration. Soon the original idea starts mutating into new forms, so that you're giving all your money to the Computer God, or becoming an utter caricature of utilitarianism straight from philosophical debates ongoing for decades and centuries, or banking on gee-whiz businesses as long as they're aligned with the cause, or just opening yourself up to all manner of grifters and fast talkers in general.
The same applies to polyamory, or nootropics, crypto or all manner of political ideologies beloved by rationalists - not that the simple idea behind them is necessarily good to begin with, but even then it just all seems to get worse and weirder, and doing so quite fast.
What one seems to need is stopgaps, intellectual roadbumps - but even then, what would these be, who would set them, and how would you take care the movement doesn't just barge through them with the power of Reason, like with everything else?
I've read speculation that the internet itself may have developed a form of self-awareness and could perhaps be considered a collective intelligence... maybe I'm misremembering and it was my own speculation.
Nonetheless if it is anything like this thread, it is hopelessly in disagreement with itself and probably, as a whole, risk averse which means the internet consciousness is not an EA?
Wow, how many logical/factual errors can one cram into one post? Based on this post I have to say it is doubtful the internet-consciousness is particularly intelligent!
Unless Minsky et al are correct. 🙂
"Bankman-Fried" has at least a little bit of a kabbalistic ring to it.
It's incredible how this NYT piece whitewashes SBF: https://www.nytimes.com/2022/11/14/technology/ftx-sam-bankman-fried-crypto-bankruptcy.html
No talk about his criminality at all. He's Bernie Madoff and they portray him like Howard Roark. Pays to have good family connections.
I am under the impression that some people in this community may have trusted Tether based significantly on assurances from FTX/Alameda individuals that they trust Tether.
If so, it seems prudent to disregard such support, and reassess your trust of Tether without regard to any statements from FTX/Alameda sources.
Maybe all of this was *vaporware*?
Most people don't know what FTX is. Most people have no idea who SBF is. Most people have never heard of EA.
Is it possible you were gaslit into thinking this was the future of humanity, and now that the con artist has been outed the confrontation with reality feels a bit unbearable?
Most rationalists feel like they are way too superior to fall for the Nigerian prince scam, but that only makes them easier prey for the slightly more advanced scam.
It looks like Ukraine made a lot of transactions with FTX but they are claiming it was only converting crypto to cash and not the other direction. I have not seen their claim rebutted so for now it’s a sufficient explanation, as it’s the other direction that would be involved in any aid-laundering scheme.
"some people have asked if effective altruism approves of doing unethical things to make money"
Curious why you use the broad formulation "doing unethical things", when you seem to be talking specifically about committing criminal fraud. When it comes to more general immorality, e.g. working for a company that significantly profits from animal exploitation, marketing unhealthy foods to kids, knowingly encouraging innumeracy to make a product or service seem more useful, is there really such a strong consensus in EA? Presumably the reasoning goes: your individual participation in ethically problematic markets only marginally increases the harm (someone else would do it anyway and on average either do a slightly worse job or need to be paid a slight bit more), whereas your donation creates an absolute benefit.
I got clued in quite early to what SBF/Alameda were likely up to, but I had the benefit of having access to some perspicacious people on Crypto Twitter.
Prompted by the below discussion, I'm beginning to wonder whether I might be aphantasic as well.
At first I thought "of course not, I can visualize things just like seeing them", but when I actually try, I can only see a tiny little bit or have a sort of vague outline or impression of an image. No matter how hard I try, I can't visualize a full image, even a small one.
For example, at one point I imagined someone drawing and pointing a sword, and I could clearly see the shape of the sword moving around, but I couldn't see the person holding it at all!
Edit: On the other hand, I definitely see things while dreaming.
Re: due diligence: all it would have taken is to read the companies' balance sheets.
That's what Binance did during the day or two they were thinking about acquiring the failed companies, and that's why they walked away: they saw how much larger their debts were than their assets.
Lessons for EA-backed charities:
- Insist that donations go through transparent, trustworthy, solvent evaluators like GiveWell.
- Have these evaluators run regular financial audits of the largest donors.
- Don't spend money till you have cash in hand!
Why would one expect the efficient-market hypothesis to hold, even approximately, for crypto?
Seems to me that a key ingredient to the EMH is evolution through natural selection: actors who are better at accurately pricing assets get richer at the expense of actors who are worse at it, and those who misprice assets run out of money and so lose their ability to distort the market.
But the big question with crypto is: is the entire industry going to suffer a catastrophic crash from which it will never come close to recovering? Even if it obviously is, we wouldn't expect there to yet have been any evolutionary pressure against market participants who fail to understand this.
An analogy: plenty of smart traders believe in Christianity, even though Christianity is (IMO) obviously nonsense. This is because there is no feedback system by which wrongly believing in Christianity is punished (I guess you could say the punishment is that they waste time ineffectually praying but that's very weak feedback). The EMH clearly doesn't apply to the hypothesis "traders can accurately divine how likely it is that Christianity is true" so why should it apply to the hypothesis "traders can accurately divine how likely it is that the crypto industry will soon crumble almost completely"?
I tried writing a story in the voice of a near-future chatbot: maybe what GPT-7 could sound like. It seems to me like the only way we've figured out to make large language models work is through attention-only transformer models, which myopically focus on next-token prediction. This means they can keep getting better at finding superficial associations, reaching for digressions, doing wordplay, switching between levels of "meta", etc, but won't necessarily cohere into maintaining logical throughlines well (especially if getting superhuman at next-word prediction starts pulling them away from human-legible sustained focus on any one topic).
This seems like it could pose a serious problem, given that the only way we've figured out how to produce (weak) AGIs is through these large language models. In other words, if you want an artificial agent who can perform tasks they weren't trained for, your only good option these days is asking GPT to write a scene in which an agent performs that task, and hoping it writes a scene where that agent is sincerely good at what you want, instead of being obviously bad or superficially good. (Ironically, GPT isn't best thought of as an agent, even though it produces agents and environments, because it isn't optimizing any reward function so much as applying an action principle of sorts, like physics; see "Simulators" by Janus at LessWrong for more). It may seem surprising that the best way we have to make any given character (or setting) is to make one general-purpose author, but it makes sense that throwing absurdly superhuman amounts of text data at a copycat would work well before we have better semantic / algorithmic understandings of our intelligence, because the copycat can then combine any pieces of this to propagate your prompts, and combinatorial growth is much faster even than exponential. If these "simulators" keep giving us access to stronger AGIs without improving their consistency over time, we don't have to worry so much about paperclip maximizers or instrumental convergence, but rather about our dumb chaotic human fads (wokeism, Qanon, etc) getting ever more speed and leverage.
I also tried keeping to bizarre compulsive rules while writing this--mostly not ending one word with the sound which begins the next word, mostly not allowing one sentence to contain the same word multiple times, etc--because I think we'll see strange patterns like that start cropping up (much as RLHF has led popular GPT add-ons to "mode collapse" into weirdly obsessive tics). As I practiced this, I felt like I was noticing discrete parts of myself notice these sorts of things much more, which fed that noticement further, until it blotted out other concerns like readability or plot, until even the prospect of breaking these made-up arbitrary taboos felt agonizing; I imagine this is sort of what inner misalignment "feels" like. Maybe that's also sort of like what Scott mentioned recently with regard to cultivating "split personalities." Anyway:
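(As an aside, those compulsive rules can be checked mechanically, which is part of what makes them feel so machine-like. A crude sketch, with the big caveat that it approximates "sound" by letters, which English spelling only loosely supports, and treats the rules far more rigidly than a writer would:)

```python
# Rough checkers for the two self-imposed writing rules described above.
# Assumption: "sound" is approximated by final/initial LETTERS, not phonemes,
# so this is only a loose stand-in for the actual phonetic constraint.
import re

def repeated_word_sentences(text):
    """Return sentences that use the same word more than once."""
    bad = []
    for sentence in re.split(r'[.!?]+', text):
        words = re.findall(r"[a-z']+", sentence.lower())
        if len(words) != len(set(words)):
            bad.append(sentence.strip())
    return bad

def letter_collisions(text):
    """Return adjacent word pairs where one word's last letter begins the next."""
    words = re.findall(r"[a-z']+", text.lower())
    return [(a, b) for a, b in zip(words, words[1:]) if a[-1] == b[0]]

sample = "The cat tried the door. Dogs sleep."
print(repeated_word_sentences(sample))  # ['The cat tried the door']
print(letter_collisions(sample))        # [('cat', 'tried'), ('dogs', 'sleep')]
```

Running a constraint like this over every draft sentence is a decent analogue for the "discrete parts of myself noticing" loop: a cheap detector that fires constantly and crowds out everything else.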
> I hope the investigation finds some reasonable explanation, like that they were doing so many stimulants
One source suggests SBF was on EMSAM / Selegiline (twitter.com/AutismCapital/status/1592237980458323969), which causes pathological gambling, compulsive buying, compulsive sexual behavior, and binge or compulsive eating. Ticks all the boxes.
When I read the linked Eliezer article about the ends not justifying the means, I was surprised to find that nobody in the comments mentioned Original Sin, since he in many ways recreated the idea there. Like in this quote:
"But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe."
That rings to me of a hundred sermons I've heard on humankind's sinful nature: that we are corrupted beings who when following their own desires and reasoning inevitably go wrong. It reminds me of Paul, writing:
"For I have the desire to do what is good, but I cannot carry it out. For I do not do the good I want to do, but the evil I do not want to do—this I keep on doing. Now if I do what I do not want to do, it is no longer I who do it, but it is sin living in me that does it. So I find this law at work: Although I want to do good, evil is right there with me. For in my inner being I delight in God’s law; but I see another law at work in me, waging war against the law of my mind and making me a prisoner of the law of sin at work within me."
Of course there's a big difference between Eliezer's conception and the classical Judeo-Christian one: namely, Eliezer believes that a perfect intelligence without corruption would be able to act out utilitarian consequentialism accurately, and that would be the right thing to do. On the other hand, is this actually so different from the Judeo-Christian conception? God does a lot of things that would be considered wrong for humans to do: Christians tend to justify it on non-utilitarian grounds (i.e., God can kill people because we all are His rightful property in some sense, or something similar), but you could also justify them by Eliezer's criteria: as a non-corrupted superintelligence, perhaps God can make those kinds of utilitarian decisions that we corrupted and sinful men cannot. He can decide to wipe out all of humanity except one family in a flood, because he can calculate the utils and knows (with the certainty of an omniscient superintelligence) that this produces the best result long term, that the pre-Deluge population is the one man on the trolley tracks who needs to die so that the five men can live. Certainly Leibniz's idea that this is the best of all possible worlds rests firmly on that same justification: that all the bad things caused or allowed by God are justified in the utilitarian calculus, because all alternate worlds would be worse.
I don't know if I buy all that, but it surprised me how rationalists find themselves re-inventing the wheel in some cases. More power to them; better to re-invent it than have no wheels at all.
About 20 seconds into the "crypto" pitch, it becomes obvious from the very language used that it's a scam and a Ponzi scheme. That's when you check to make sure you still have your wallet, and keep walking.
If you choose to participate, the only question is, Will you be one of the few to benefit from the scheme, or will you be among the majority who lose?
Cash works. I always pay tips in cash, so unscrupulous business owners can't claim the server's gratuity as part of her or his lowly hourly wage (Sorry, Ronald Reagan).
I don't have a lot of sympathy for people who fail to use common sense.
> If you think you’re better at it than all the VCs, billionaires, and traders who trusted FTX - and better than all the competitors and hostile media outlets who tried to attack FTX on unrelated things while missing the actual disaster lurking below the surface - then please start a company, make $10 billion, and donate it to the victims of the last group of EAs who thought they were better at finance than everyone else in the world. Otherwise, please chill.
Is Scott serious about this? This is like "you're not President! How dare you criticize the President?" Or like those Mormons on Usenet who told me that it doesn't matter that Mormons hide their secret ceremonies because if I wanted to know about them I could always spend a couple of years being a Mormon. (Which incidentally would also mean I could get punished for criticizing them, which defeats the whole purpose of wanting to know about them.)
"You must become a billionaire yourself or you have no right to criticize a billionaire" is an awful, awful take, and as a poisoning-the-well fallacy it is symptomatic of the problems that got you guys into this mess in the first place. (And even when X is easier to do than becoming a billionaire, "you must do X or you don't get to criticize X" is an awful take. I probably could become a Mormon, but I shouldn't have to in order to say there's something I don't like about Mormonism.)
> "This is just rule utilitarianism"
Actually, that's not quite right. It's multi-level act consequentialism. The difference is explained here:
My latest post gives more of a breakdown of the different alternatives to *naive* (single-level) act consequentialism:
Your Mistakes disclaimer, the opening statement: "I don't promise never to make mistakes." Grammar and sentence structure: that says you never promise to make mistakes. Corrected, the double negative should read: "I don't promise to never make mistakes." Trivial? Yes, but it totally changes the meaning.
This is cutting against the grain of this thread, but does anyone here still take COVID seriously, or are all of you basically over the pandemic? My brother and his girlfriend still mask up when going to certain places, and they also got the latest booster, and my corner drugstore and my parent's cafe still requires masking, but otherwise, I rarely see people masked up. Am I right in assuming it's pointless to care about COVID still?
In retrospect, should it have been a red flag that FTX didn't buy a billion malaria nets and distribute them in Africa?
EDIT: Aka, should it have been a red flag that an entity claiming to be Effectively Altruist was only doing high-status effective altruist activities, not low-status but effectively effective altruist activities?
> Like many of you, I’ve been following the FTX disaster.
What is/was FTX?
I created a prediction market for whether any FTXFF grantee will be legally compelled to return money due to FTX's bankruptcy. I think it's unlikely. https://manifold.markets/JonathanRay/will-any-ftxff-grantee-be-legally-c
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America. You, a random AI researcher who tried Googling “who are these people and why are they giving me money” before accepting a $5,000 FTX grant, don’t need to feel guilty for not singlehandedly blowing the lid off this conspiracy. This is true even if a bunch of pundits who fawned over FTX on its way up have pivoted to posting screenshots of every sketchy thing they ever did and saying “Look at all the red flags!”"
I very rarely comment here, but I follow you voluntarily. I don't think you're a bad guy, I've learned some interesting things from you. But this reply is really a joke, I'm sorry. I'm a random well-educated liberal, and it's been wildly obvious to me that FTX was a Ponzi scheme for years, and more importantly, not just me, but a thriving crypto-skeptic community.
You'll see right in the biography that this guy's been covered by the MSM for years. Ever since Mt. Gox blew up, there has been a super-abundance of critical analysis of crypto as a giant scam.
I'm just posting random links I used in emails years ago. Here, this was easy to find from 2018:
Does this sound like a trustworthy basis for assigning financial value? No, it does not. This is CNBC.com - I am not deep diving here.
I am neutral on your point as to whether NGOs should feel *bad* about taking money from a criminal. They were presumably using the money to do good, and it's easy to get confused and not know if and when the scammer was crossing the line from unethical lying and cheating to criminal behavior. That's an individual ethical decision. But I guarantee you that the Democratic party, ProPublica, and the SEC were extremely aware that SBF was an untrustworthy scammer, although they may not have all known he was crossing into criminal behavior.
The details of how SBF appears to have committed fraud were not obvious and well known, but crypto was readily knowable as a Ponzi scheme that was consistently bringing ruin to naive people. You absolutely could have done better due diligence to understand that, and so could any NGO who wanted to understand with an hour of research. Of course, it's easy to do research badly and not realize that you have done a bad job, so I'm not personally scorning anyone who was rugged, but those people absolutely should hold themselves accountable for mucking up something not overly difficult.
You don't need a prediction market, you just need a reasonably diverse base for information intake and a willingness to take adverse information seriously. Crypto exchanges have been blowing up on the regular for a decade, all the info was there in plain sight.
Blockchain technology has the possibility to change the world for the better but we have yet to get it truly woven into the fabric of our society and the regulating powers that be may ruin it because it makes so many of their institutions obsolete. Right now it's like the internet in 1996. No idea how to invest other than in the broad idea that it will move forward. Cryptocurrency, on the other hand, doesn't have a super compelling use case for developed economies other than being like a wildly speculative commodity.
I think Scott's piece on EA as a tower of assumptions is particularly relevant now: https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of.
If EA were a single indivisible idea that includes the FTX affair, that would be pretty bad.
But luckily, it is a series of distinct assumptions. One can be skeptical, for example, of the idea that one should prioritize a high expected value, even if the modal outcome is neutral or negative, but that would be no reason to doubt much more basic EA assumptions, like "not all dollars of charity have equivalent impacts." Or "poorer people generally benefit more from charity than richer people, and by global standards, very few of the poorest people live in Western countries."
For anything concrete you'd have to go back in time and be smarter than SBF. Here's the archive link (which of course doesn't prove anything)
Hence including this in the topic of "updating". My prior on "big political donor is involved in money laundering" is high to start, of course. When the donor in question is a finance guy I update higher. When he appears to be a force for good in the world like SBF I revise downwards. When he's caught doing actual fraud, I revise upwards again.
Admittedly, my original prediction of technically-legal finance shenanigans may be much higher than most readers here, but I don't -feel- cynical. It's what allows me to laugh off critics of Bernie Sanders saying he has three houses and a couple of supercars. Well yeah, but I'm sure he got all his assets in ways that are technically legal. He's in Congress, what do you expect?
(Huey Long, when asked how his personal wealth had grown 10x more during his time in office than his gross salary, famously replied, "Only by exercising the most exTREME frugality.")
> The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”
You know, I asked myself some similar questions after being cheated on by a spouse and friend. In the end, I decided the act of trusting itself has intrinsic value. It's not infinite, so you have to take some care, but if you trust 100 times and get burnt once, and that once isn't the end of the world, maybe you came out of it OK?
Also, telling everyone proactively how, when you were fooled, it made you feel bad and take more care in the future means placing trust in you is a better bargain than placing trust in any other random person.
"...but I am just never convinced by these calibration analyses."
I'd be more convinced if the *polls* seemed to take this into account and get better over time.
One thing that I think hurts a lot of this is that folks really want to assume independence because it makes the math so much easier. The underlying reality often isn't independent.
I have a super-short writeup about this and how I think it helps to explain the 2016 election errors by the pollsters.
I used FTX and left some money there way longer than I should have because of EA/SSC/Rats implicit and explicit vouching for SBF. Obviously I don't blame anyone but myself, it wasn't a big portion of my portfolio (sadly I can't login to check specifics) and I'd still trust the related communities more than most but it still seems like a big collective L.
Why are you demanding proof from me? I didn’t endorse it as proven! But it is an angle that was not mentioned by Scott and I thought it should be included as a possibility since he is trying so hard to figure out what to think about this.
Someone else posted this link here but maybe you didn’t see it. I recommend it because even though it isn’t directly relevant to the Ukraine theory, it argues very strongly that SBF was put up to create FTX by much older and more experienced figures and that FTX never made any sense as a legitimate business.
> make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively be emotionally dead to anybody who goes above a certain threshold.
Do you think you should add to that list a certain blogger who advocates for a monarchy in a time when the more religious party coalesced around a figure who sent a mob to disrupt the peaceful transition of power, given that your association with him has pulled his audience into yours and given your halls a well deserved reputation for racism and fascism among those who have had the good sense to be driven away from that stink? Or are you still being charitable to bad ideas from a dude whose qualifications consist of having a blog with smug essays on it? (I suspect you're about to learn that headlines are short-lived, and those trotted out as stars for a few years can be abandoned and ignored in a mere turning of the times. If you want to stay shining, and I'd like that personally, you're giong to have to think about your mistakes. Thiel won't even look at Moldbug if it doesn't suit his purposes anymore.)
The problem with the SFBA Rationalist cult is very specifically that their anti-credentialism led them to discount the importance of any establishment knowledge, and utter crankery is the result.
Someone else called it: hubris. It's hubris that causes EA/Rationalist types to attempt to solve the same problems as everyone else believing their magic online blog juice will prevent them from making the same mistakes as everyone else, so they make not just the same mistakes but the same mistakes from a past era.
Credulity isn't a virtue if an entire community forms around someone who sorts out the most credulous and willing to believe the narrative of genius even secondhand...
"The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”"
You can. You have to. If prediction markets are what works to help you, then go prediction markets.
I've been through this with the entire sexual and other abuse scandals in the Catholic church. It's really awful when you have to accept that all the horrible stuff is indeed true, and one reaction is naturally "How can I ever believe anything or anyone ever again?"
I'm still Catholic despite it all. It's the wheat and the tares, and we just have to try and do the best we can until the end. There will always be bad actors, but we should not let that make us doubt everything.
Three months ago I had never heard of EA, prediction markets, FTX, or any number of guests of the modern scene that everyone else on this thread takes for granted. I am old and out of touch; the world moved on while I stayed still.
So my opinions only have limited value; they are what one might hear from a reasonably well educated liberal, put in cryogenic storage in 1979 and just thawed in 2022! I and my impressions are truly from a different era.
But here, for what value there may be, are some opinions.
1. EA seems a good concept, but I detect a little hubris that might lead to cultic qualities down the road (cults were a problem in my era). But it would be a kind of crowdsourced, decentralized cult without the usual charismatic leader. There are obvious downsides to diverting philanthropic energies from small-scale present benefits to notional large-scale far-future benefits. One starves the present to feed a future that may never instantiate. Best, seems to me, to establish some ratio, perhaps 80/20, to do both. The EA community, if it's identical with the rationalist community, seems to overthink things a bit; to get lost in analysis and minutiae. Might be best to take a break every so often and drop the glowing screens. Go outside and hike or do physical labor; put on jeans, boots, and work gloves. Ground. All of this stuff is extremely ephemeral after all!
So much for EA, both admirable and problematic.
FTX and the financial world that gave birth to it. Mixed blessings, but badly in need of regulation. Seems fragile, has questionable grounding in real value, so falls under a strange variation of the Red Queen Hypothesis. If notional value and traditional "real" value are competing for resources perhaps we need to look at Competitive Exclusion concepts? Over my head and pay grade, in any case.
Prediction Markets. Ingenious innovations (tho' variants must have been around for a long time). Seem to be gambling under a different name. Are they regulated?
ACX: You all are collectively the most impressive group of thinkers and writers I've ever seen outside of graduate seminars. I'm seriously not in your league, and in over my head besides being behind the times.
That's all. TL;DR! (the time traveller learned finally what that meant. Short attention spans in this era!)
I'm wondering whether we should be expecting to see a system of contractual courts evolve in the crypto space. I can't remember what David Friedman calls them.
I've been dubious about the idea because I'm not sure of where the initial trust comes from.
Does anyone know of prominent voices in the EA movement that were warning ahead of time that FTX was possibly fraudulent? I ask because although Scott addresses that EA doesn't support such things in theory, there's another question about whether EA is just basically competent at evaluating risks. That's supposed to be their whole thing, and yet in one case where we know the final outcome, they blew it about as hard as possible. If you are worried about, say, AGI due to the messages put out by EA, you probably need to take another good hard look at those beliefs.
I deleted the markets; EAs were taking loads of flak for being galaxy-brained, so it was poor timing on my part.
I have pretty severe seasonal affective disorder. Instead of dealing with antidepressants and light therapy each winter, I wondered if I should just up and move down south to Texas or Florida. Does this work for stopping the disorder? Have any of you done it, and what do you recommend?
"True, there are also other people outside of finance who are also supposed to look out for this kind of thing. Investigative reporters. Congress. The SEC. But the leading US investigative reporting group took $5 million from SBF. Congressional Democrats took $40 million from SBF in midterm election money. The SEC was in the process of allying with SBF to anoint him as the face of legitimate well-regulated crypto in America."
I can't speak to the finance side of things (though are they looking at 'is this a scam,' or are they looking at 'will this make money?' those are two different questions and for a while it made money). But the other examples don't seem great to me?
Taking people's money, so long as it doesn't come with strings, doesn't usually mean you've vetted/agreed with them; quite the reverse, in fact. And the SEC thing was that regulations were needed, which just seems transparently correct at this point? Now, SBF was presumably trying to use them to limit competition without limiting his ability to commit what really looks like fraud, but it's not at all clear that he would have succeeded in that, even if everything hadn't collapsed.
> 9. The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”. So I was pleased when Nathan Young figured out the obvious solution: make a list of everyone I’ve ever trusted or considered trusting, make prediction markets about whether any of them are committing fraud, then pre-emptively [...]
This is being taken way out of context to show what total freaks we are. I don't think people realize this is tongue in cheek.
(This is tongue in cheek, right?)
I have a question about election odds and prediction markets generally -- anytime I see backward-looking analysis, it all says they're well calibrated, etc. But those analyses I've seen seem to just take one data point of odds during election day -- "if the odds are 60% on election day, that candidate wins 60% of the time" for instance.
But that doesn't seem helpful to me -- what about 1 year in advance? 6 months in advance? Have those odds proven well calibrated? I'm very surprised these markets don't hover very close to 50/50 until about a month out.
Re: point #2
> But right now is a great time to be a charitable funder: there are lots of really great charities on the verge of collapse who just need a little bit of funding to get them through.
I don't actually know what order of magnitude "a little bit" means here. I'm not a VC or anything, just a fairly boring person who happens to batch their charitable donations to once per year for convenience (yes, I already know this is not generally how charities prefer funders operate). I suspect when Scott asks for potential charitable funders he's talking about bigger game than me, but if a four digit sum of money would make an outsized difference somewhere it would be nice to know about it.
Think the triage process Scott's working on will publish a shortlist of in-trouble charities soon, for small donors like me? Or is there already a post on the EA forum or somewhere that I haven't seen?
Have people done calibration studies on prediction markets? E.g., take all the markets that had $0.60 as the final price for yes and see if 60% of those resolved to yes. I'm especially curious if prediction markets show any systematic overconfidence or underconfidence in their results.
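For what it's worth, the mechanics of such a study are simple. Here's a minimal sketch in Python; the data below is synthetic and well calibrated by construction (a real study would substitute actual market outcomes), just to show the shape of the analysis:

```python
import random

def calibration_table(markets, n_bins=10):
    """Bin markets by final YES price and compare the mean price in each
    bin to the fraction of those markets that actually resolved YES.

    `markets` is a list of (final_price, resolved_yes) pairs.
    Returns a list of (mean_price, frequency_yes, count) tuples per bin.
    """
    bins = [[] for _ in range(n_bins)]
    for price, resolved in markets:
        bins[min(int(price * n_bins), n_bins - 1)].append((price, resolved))
    table = []
    for items in bins:
        if not items:
            continue
        mean_price = sum(p for p, _ in items) / len(items)
        freq_yes = sum(1 for _, r in items if r) / len(items)
        table.append((mean_price, freq_yes, len(items)))
    return table

# Synthetic perfectly-calibrated markets: YES resolves with probability
# equal to the final price.
random.seed(0)
data = [(p, random.random() < p) for p in (random.random() for _ in range(10000))]
for mean_price, freq_yes, n in calibration_table(data):
    print(f"price ~{mean_price:.2f}: resolved YES {freq_yes:.2f} (n={n})")
```

Systematic overconfidence would show up as the resolved-YES column being pulled toward 0.5 relative to the price column; underconfidence as the reverse.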
Elon Musk weighs in on SBF: https://twitter.com/elonmusk/status/1591895343570243585
On point #5, EA does not endorse doing clearly bad things, but prominent EA people such as MacAskill have definitely endorsed taking big risks. SBF's thinking and behavior is very much in line with the EA idea that an action that will probably fail is justified if it has mathematically higher expected value compared to other options.
For instance, in What We Owe The Future (appendix) MacAskill argues that we should not be afraid to "chase tiny probabilities of enormous value"—in other words, we should take actions with the aim of improving the far future, even if the likely outcome of those actions is nothing. He draws an analogy to the (supposed) moral obligation to vote, protest, and sign petitions, even when again the likely outcome is nil. In MacAskill's example, say you can press Button A to save ten lives, or Button B to have a one in a trillion trillion trillion chance of saving one hundred trillion trillion trillion lives. If you're a normal person, you press A and you save lives. MacAskill says you should press B, even knowing that the likely outcome is nothing.
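For concreteness, the arithmetic of that two-button example (the figures are my reading of the "trillion trillion trillion" phrasing) works out like this:

```python
# Button A: save ten lives for certain.
ev_a = 10

# Button B: a one-in-a-trillion-trillion-trillion chance (1e-36)
# of saving a hundred trillion trillion trillion lives (1e38).
p_b = 1e-36
lives_b = 1e38
ev_b = p_b * lives_b  # expected lives saved

# Naive expected value ranks B ten times higher than A,
# even though pressing B almost certainly saves no one.
```

So under a pure expected-value rule, B wins 100 to 10, which is exactly the kind of reasoning at issue.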
This is directly analogous to SBF's idea that we should weight money linearly (in other words, rejecting decreasing marginal utility of wealth). SBF is willing to "accept a significant chance of failing" in exchange for a small chance of doing a lot of good.
So MacAskill and SBF both endorse taking actions with a large chance of failing if the expected value is high enough, whether that's speculating with customer funds or pouring resources into uncertain projects with a very tiny chance of shifting the far future in a positive direction.
Now there's a distinction between "very risky actions" and "clearly morally bad actions"...but that line is not so bright. SBF took a risk (morally as well as financially) and failed. But no one would be criticizing him if he had succeeded. FTX took big risks, as EAs advocate, and failed. But EAs should understand and acknowledge that frequent failure is a predictable outcome of taking big risks, and, given these values and assuming the math is correct, failure doesn't prove that the actor's underlying thinking was wrong.
What the heck kind of suppression is this? FTX is an almost unprecedented blowup with huge political implications because they were the second largest Dem donor after Soros, nobody knows key details because they were radically non-transparent, by definition SOME kind of conspiracy was involved in a situation like this so ANY investigation of who did what will be possible to disparage by calling it a “conspiracy theory”, and this is a frigging OPEN THREAD.
Open your eyes, man.
I was just sitting here, before this open thread, thinking about how my inner critic is excessively harsh and just kind of an asshole. Then I open your thread and see what I think looks like you being hard on yourself for what seem to be similar reasons. You’re doing great work, Scott. If you never trust a scammer at least once in your life, maybe you are missing out on lots of chances to do real good?
Crypto is such a big scam. Lots of VCs and investors are into it because there's money in it. That doesn't mean people have to have amazing insight to beat their assessment of FTX, just basic due diligence that while FTX might be a money hose at present it's built on scams. Crypto is useful for crimes and some extremely limited database functions. It's a scam! Always has been. So yeah, easy for people with a basic understanding to beat investors on the question "is this a reputable organisation" even if they should defer to the investors on the question of "whether or not this potentially criminal enterprise will make money."
Space piracy question: using known physics, is it plausible to catch up to a fleeing space craft and board it?
The limiting resource for space travel is Δv. It scales logarithmically with the amount of fuel you bring, so while the pursuer will have more Δv, it seems implausible that they have ten times as much.
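That logarithmic scaling is the Tsiolkovsky rocket equation, Δv = Isp · g0 · ln(m_wet / m_dry). A quick sketch (the Isp and masses here are made-up numbers, chosen only to show the scaling):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, wet_mass, dry_mass):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_wet / m_dry)."""
    return isp_s * G0 * math.log(wet_mass / dry_mass)

# Quadrupling the propellant fraction relative to dry mass
# only doubles the achievable delta-v.
dv_ratio2 = delta_v(350, wet_mass=2.0, dry_mass=1.0)
dv_ratio4 = delta_v(350, wet_mass=4.0, dry_mass=1.0)
```

This is why a pursuer with ten times the trader's Δv is implausible: it would require the mass ratio raised to the tenth power, not ten times the fuel.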
I will assume that the tech to detect ships over long distance is there, this should benefit the pirates. (Hard to do piracy in fog and all that.)
Capturing a spacecraft which has the bare minimum of fuel it needs for its flight plan is not hard: track it, figure out when it will do its burns, and intercept it at some other point in phase space (i.e., match both its position and velocity at interception time).
This does not seem like a stable equilibrium, however. Eventually traders will carry their own spare Δv.
In that case, the trader will try to add maneuvers to avoid the interception point, and the pirate will do burns to keep up. What factor of spare Δv does the pirate have to have over the trader to succeed?
One assumption would be that the trader is traveling between Mars and Earth, and both Mars orbit and Earth orbit are safe havens from pirates. So the pirate has to do the interception somewhere en route.
If both start near to each other, it seems like an easy win for the pirate: they mostly have to match the velocity of their victim maneuver for maneuver, and just invest a little extra Δv to close the distance (and get rid of their relative momentum afterwards).
If they start further away from each other, the task seems harder, but I can't really say by how much.
I am also unsure if gravity fundamentally matters for the outcome or if it would be the same if the pursuit happened in interstellar space (where it is probably easy to calculate).
Also, the point of piracy would be to rob goods or claim ships and take them elsewhere than the destination the original owner had in mind. This seems to put limits on the economics of robbing bulky stuff like ice transports. Robbing stuff with a high price density seems more plausible, but these also seem in a position to have a high fuel-to-payload mass ratio.
I’m looking for a software engineering side project that’s fun, useful, and won’t take too long to implement. Any ideas?
Does anybody know of a memory aid, such as a mnemonic, for the differences between spondylosis, spondylolysis, and spondylolisthesis?
For the mystified: https://www.youtube.com/watch?v=VZBeNGVPslw
Scott's recent post on unfalsifiable internal states made a passing mention of Galton's research on visual imagination, which got me thinking about the topic again and reading Galton's original paper on the matter (https://psychclassics.yorku.ca/Galton/imagery.htm).
One of two things has to be true. Either (A) I am somewhere close to rock bottom on this scale — I identify most closely with the response that Galton ranks #98/100 — or, (B) the people much higher on the scale are either miscommunicating or deluding themselves. The past century and a half of discourse on this topic has mostly been people higher on the scale patiently explaining in small words to people like me that no really, it's (A), and these differences are real and profound. But I'm not convinced.
I do have *spatial* imagination — the ability to hold a scene in my head as an index of objects with shapes, colors, and spatial relationships, and from there make geometric deductions. But to say there is anything visual about this imagination seems strictly metaphorical. The metaphor is a natural one, because humans derive spatial information about our surroundings mostly through sight. But when considering imaginary objects, it would be no more and no less apt to analogize my thought process to feeling around the scene with my hands and deriving information through touch.
Incidentally, I don't dream visually either. My dreams contain emotion, thoughts-as-words, proprioception, and sometimes pain, but I would characterize my spatial perception in dreams as just a dim awareness of what's surrounding me rather than anything visual, like walking in a dark but familiar room. The rare exceptions to this invariably are perceptions of written words.
I have no trouble accepting that there are certain commonplace mental experiences that are just completely missing from my neurology. I already know that sexual jealousy is one of those, and can easily recognize and accept that one because it has easily observable behavioral consequences: that I've been in a comfortable relationship with a polyamorous partner for seven years, while the vast majority of people run screaming from the notion of such a lifestyle. The reason I find visual imagination harder to accept is that it seems like this kind of evidence *should* exist, yet I've never seen it. The ability to visualize a scene in any literal sense, even dimly, seems incredibly useful and should have a lot of unfakable consequences! It should be easy to create a test at which anybody who has it to even a modest degree should be able to easily outperform me. Yet, on some tests that seem like they should work, I come out near the top.
I'm thinking, especially, of blindfold chess. I can play chess with my back turned to the board and just communicate coordinates with my opponent. I've even played two games at once this way, and won them both without making any blunders or illegal moves. Blindfold chess is by no means easy for me — it requires a lot of concentration — but I can do it and I've been able to do it ever since I was very young and a beginner at the game. Yet, most people, even most people who are better at chess than I am, find this ability almost unfathomable (lots of chess *masters* can do it, but I'm nowhere near that level). It seems like any degree of true visual imagination should render this task far easier. I don't understand how I can apparently be near the bottom at visual imagination, yet near the top in this skill.
This all leaves me skeptical that the differences in mental experience are anywhere near as stark as Galton claims. I think that the people who claim much more vivid visual imagination are communicating poorly and insisting that they mean their words more literally than they actually do.
Now that we are permitted to post frivolous ideas again, it occurred to me that Scott - instead of simply declining a Conversation with Tyler [Cowen] - might consider writing a satire as if one had happened. For example:
T: It's time for overrated or underrated.
S: If we must.
T: The mental health benefits of Ixtlilton. Overrated or underrated?
S: Ix...? No, wait. You can't fool me into thinking an Aztec god is a medication!
"I didn’t actually tell other people they should trust FTX, but I would have if those other people had asked."
Why? I hope this doesn't come across as an aggressive question, but I'm curious to understand what it was about the situation which would have led you to that conclusion. Was it based on an assessment of the people involved or of the exchange structure?
"I just subscribed to Astral Codex Ten" got SBF only 22 likes (23 is me, right now). Ok, Jesus had just 12. - Nice post. Makes me think of: all the smart and nice people who work(ed) for government agencies/NGOs supposed to do good, but really are often/mostly not - or mostly embezzlement of taxpayers' money. I am still sorry I complained about the silly stuff I was supposed to do for the Goethe-Institut - which got me fired. I might have been able to spend some funds in a slightly less silly way. And it would have gotten me 200k in net extra life earnings. - At least I never taught at schools. Well, except when I did. Can a good person work in a wrongdoing org/company? Of course not, except when they do. Which is: all the time. "It didn't pay to trust another human being. Humans didn't have it, whatever it took." "If I bet on humanity, I'd never cash a ticket." Bukowski, obviously. RIP
Why highlight only the $40 million going to Democrats? A stronger statement, and a better one for supporting the thesis of the point (i.e. "most people everywhere were hoodwinked, including in the political system"), is in the article itself:
"Of that total, 92% has gone to the Democrats, with the remainder going to Republican candidates and campaigns. FTX co-CEO Salame favors the red side of the political divide, donating $23.6 million to Republican campaigns for the current cycle.
The top political contributor was billionaire investor George Soros, who has pledged $128.5 million to the Democrats. Billionaire venture capitalist Peter Thiel, who has backed several crypto startups, was ninth on the list with $32.6 million for the Republicans."
Saying that the $40 million went to Democrats makes it seem like they were uniquely vulnerable to, or compromised by, Ponzi crypto money, when that's not the case - especially now, when the Democrats are about to lose the House.
"Lower your opinion of me accordingly."
Your trust in effective altruism and those who believe in its validity is an epistemological red flag.
On point 7 - I think the main anti-crypto claims made by hostile media outlets were that crypto is full of grifters, that lack of regulation makes crypto a dangerous industry, and that crypto is a series of ponzi schemes. And I think the first two of those claims do an accurate if not precise job of explaining (predicting?) what went wrong at FTX, and you could make a case that the third one does too.
A question for the guys who make "crypto is sound money / could be the next gold standard / would stop fractional reserve banking / rein in the central banks / is more stable than fiat currency" type arguments: has FTX lowered your confidence?
If SBF was investing his customers' assets and covering the difference in account balances by minting his own token, isn't that basically fractional reserve banking? The whole thing looks a lot like an ordinary bank run to me.
How is "selective serotonin reuptake inhibitor" parsed?
Is it "( selective ( serotonin reuptake ) ) inhibitor" or "selective ( ( serotonin reuptake ) inhibitor )"?
Generally, my surprise at "another crypto outfit goes down in flames among accusations of fraud and deceit" is on the "time to get more popcorn" level. Apparently, the demise of this particular outfit hurts genuinely well-meaning people, not just the usual fools who have not bothered to watch "Line goes up" yet, which is unfortunate. But to me, the big question is, who's next? If, as Scott describes, one company managed to hide behind an altruistic window-dressing and almost achieve regulatory capture - what are the other timebombs that are ticking in the US and European economies?
I wouldn't be too surprised if Elon Musk's empire collapsed next (I feel there has been a shift in public perception from "tech wizard/ genius entrepreneur" to "Bond villain/ bumbling fool", which may make it harder to pull off more stunts). Who/ what else?
More on the technical side, Frances Coppola explains a bit of what was going on at FTX/Alameda:
"his hedge fund can make money by taking risky leveraged positions, but it has to raise funds, and that's not cheap. And his exchange can make money by charging fees on transactions, but although that can be a nice slow steady income, it's not going to make him the trillions of dollars he wants.
But Joe's spotted an opportunity. The exchange has lots of customer assets that aren't earning anything. If he puts those customer assets to work, he can earn far more from his exchange customers. And he's got an obvious vehicle through which to put them to work. The hedge fund. If he transfers customer assets on the exchange to the hedge fund, it can lend or pledge them at risk to earn megabucks.
Of course, there's a risk that the hedge fund could lose some or all of the customers' funds. And the exchange promises that customers can have their assets back on demand, which could be a trifle problematic if they are locked up in leveraged positions held by the hedge fund. But this is crypto. There's an easy solution. The exchange can issue its own token to replace the customer assets transferred to the hedge fund. The exchange will report customer balances in terms of the assets they have deposited, but what it will actually hold will be its own token. If customers request to withdraw their balances, the exchange will sell its own tokens to obtain the necessary assets - after all, crypto assets, like dollars, are fungible. "
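Coppola's mechanism can be sketched as a toy ledger. This is a minimal illustration, not a claim about FTX's actual books: the class, balances, and prices below are all hypothetical, and the only point is how reported customer balances can stay intact while the exchange's real holdings are quietly replaced by its own token.

```python
# Toy model of the exchange/hedge-fund shuffle described above.
# All names and numbers are hypothetical illustrations.

class Exchange:
    def __init__(self):
        self.reported = {}      # balances shown to customers (in BTC)
        self.btc_held = 0.0     # actual BTC in custody
        self.own_token = 0.0    # exchange's own token, held in place of BTC
        self.token_price = 1.0  # BTC per token, set by a thin market

    def deposit(self, customer, btc):
        self.reported[customer] = self.reported.get(customer, 0.0) + btc
        self.btc_held += btc

    def lend_to_hedge_fund(self, btc):
        """Send customer BTC to the fund; book freshly issued tokens instead."""
        self.btc_held -= btc
        self.own_token += btc / self.token_price

    def withdraw(self, customer, btc):
        """Honor a withdrawal, selling tokens for BTC if custody is short."""
        if self.btc_held < btc:
            shortfall = btc - self.btc_held
            tokens_needed = shortfall / self.token_price
            if tokens_needed > self.own_token:
                raise RuntimeError("insolvent: token sales can't cover withdrawals")
            self.own_token -= tokens_needed   # "sell" tokens at market price
            self.btc_held += shortfall
        self.btc_held -= btc
        self.reported[customer] -= btc

ex = Exchange()
ex.deposit("alice", 100)
ex.lend_to_hedge_fund(90)   # 90 BTC goes out; tokens replace it on the books
ex.withdraw("alice", 20)    # fine: token "sales" quietly cover the gap
ex.token_price = 0.1        # the token crashes during a run
try:
    ex.withdraw("alice", 80)    # reported balance says this is fine...
except RuntimeError as e:
    print(e)                    # ...but the assets aren't there
```

Everything reconciles as long as the token trades near its booked price; the moment withdrawals force token sales into a falling market, the gap between reported balances and real holdings becomes insolvency, which is exactly the bank-run shape described above.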
The lack of attention paid to global traffic deaths is genuinely insane. The US alone sees 30K annual deaths and countless serious injuries.
If it's any consolation, sometimes you just get a combination of factors that make it impossible to avoid a disaster. Even good internal controls in a company can be subverted at the top, and there were too many factors converging in this case to create a situation where SBF had no real accountability or guard rails on misconduct except insolvency and reputation loss.
Investors in 2019 had massive FOMO about missing out on the Next Big Publicly Traded Tech Company, and there were plenty of dubious firms getting big money with no accountability because if you insisted on accountability . . . well, there was another investor willing to step up, and what if HE got your big Facebook 2.0 stock payday?
Best thing you can do is try and guess whether it was really one of those circumstances, and diversify your risks and hopes.
I am terribly sorry Scott. But, I have stayed as far away from crypto as I possibly could and have told everyone I know about the problems I see.
I spent a lot of time working in the financial business and have been deeply involved in regulatory and other back-room and plumbing issues. I knew, and tried to warn people, that crypto does not have the systems, or the people to run them, that conventional financial institutions do. And those institutions are not spotless. But there are so many overlapping regulatory authorities (FRB, FDIC, FINRA, PCAOB, NYSE, FASB, ...) and pots of money that most people can spend their lives being FDH (fat, dumb, & happy) about their banks and brokerages.
This is not true with crypto. In 2008 several of the USA's largest financial institutions failed and a bunch of others were very close to going under. But, very few people actually lost money.
Crypto has none of those systems or back-ups.
You are not smarter about money than Warren Buffett, Charlie Munger, or Jamie Dimon. I know I am not. They all said crypto is not good. Believe them.
I think the people who are insisting that utilitarianism/consequentialism doesn't really tell you to violate commonsense moral constraints against lying/cheating/stealing when the upside is high, and that someone who engages in fraud to get billions of dollars to spend on effective charities is misapplying the view even if they were weighing the risks as well as they could, are not really owning the implications of their theory.
Yes, you should have some mistrust of your own ability to measure consequences, and that might give you utilitarian reasons to cultivate a tendency to e.g. keep promises even when your best guess is the alternative is a little better. And maybe that means "in most normal situations (or 'the vast majority of cases') following the rules is the way to go." But this kind of consequentialist justification for a buffer zone only goes so far, and when billions of dollars of funding for the most effective charities (and therefore millions of lives) are at stake, we are outside the buffer zone where the very general version of this point applies. The plausibility of this kind of deference to commonsense norms in cases like "should I cheat on my significant other if I calculate the EV is higher?" dissipates when the stakes get higher and higher.
I know why they wouldn't want to say it out loud, but I think what the utilitarian should think is "if someone really has good reason to think they can save millions of lives by defrauding (mostly relatively well-off) people and can get away with it, they absolutely should. If SBF was reasoning this way, then he made a mistake, not because he didn't respect simple moral prohibitions, but because he overestimated how long he could get away with it and underestimated the social cost to the movement he publicly associated with." True utilitarians really are untrustworthy when lots of utility is on the line! And they should own that consequence!
And rule utilitarianism is not a card that just any utilitarian can pull out in response to these cases - rule utilitarianism is a fundamentally different moral view - a much less popular view than act utilitarianism, with its own set of (quite serious!) problems. Most consequentialists in the EA movement are not rule consequentialists, and they can't just whip its reasoning out at their convenience - they would have to give up their moral view.
Everything I read leads me to believe people only see Twitter at its face value.
Am I the only person who thinks Elon Musk paid $44B for a real-time prediction market? Or perhaps it's a tool to read public attitudes, or even, à la Cambridge Analytica, an opinion-steering tool.
These tools are available if one only applies a little big data thinking to public perception models.
What a crowded thread!
On the issue of FTX: Why does anyone think that crypto has enough "real" value/potential/attractiveness to justify multi-billion dollar valuations?
The first explanation I heard for crypto was that it allowed secure, costless, untraceable transactions. But, "secure" is pretty well provided by credit cards. "Costless" is a nice concept, but credit cards and banks are pretty low cost, and who pays for all the server farms generating the blockchain if nobody ever pays anything? "Untraceable" seems mainly useful for criminals, and we've seen that, if law enforcement gets serious, transactions are actually very traceable, which ought to be obvious if the whole history of every transaction is in the blockchain.
So, it seems to me that crypto is, and always has been, a fraud. Maybe not in the legal sense, where people are knowingly selling empty sacks to gullible marks. But fraud in the practical sense that there isn't, and never will be, anything behind the smoke and mirrors.
Sorry if everyone else already knows the answers, but I've never seen an attempt to address these in a serious way.
Sorry to everyone if this was already pointed out, but... isn't this St. Petersburg shit just martingaling? Its flaws as a strategy have been well known since literally the 1700s; if this is the quality of thinking in EA circles, it's shocking that they were even trusted with one dollar of fake money. These people need traditionalism like pagans need Jesus.
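For what it's worth, the ruin dynamics of martingaling are easy to demonstrate. Below is a minimal, hypothetical simulation of a double-after-loss bettor on a fair coin with a finite bankroll; all parameters are made up for illustration. The characteristic result is many small wins punctuated by occasional total ruin, with no improvement in expected value.

```python
import random

def martingale_run(bankroll=1000, base_bet=1, target=1100, seed=None):
    """Double the stake after every loss on a fair coin flip; go all-in
    when the bankroll can't cover the doubled bet. Stops at `target` or ruin.
    (Parameters are hypothetical, chosen only to illustrate the risk.)"""
    rng = random.Random(seed)
    bet = base_bet
    while 0 < bankroll < target:
        stake = min(bet, bankroll)  # all-in if we can't cover the double
        if rng.random() < 0.5:      # win: pocket the stake, reset the bet
            bankroll += stake
            bet = base_bet
        else:                       # loss: double down, as the strategy demands
            bankroll -= stake
            bet *= 2
    return bankroll

runs = [martingale_run(seed=i) for i in range(2000)]
busts = sum(r == 0 for r in runs)
print(f"busted {busts}/2000 runs; mean final bankroll {sum(runs) / 2000:.1f}")
```

Because the coin is fair, no staking pattern changes the expected value; martingaling just trades a high probability of small gains for a small probability of losing everything, which is the same shape of risk the comment attributes to St. Petersburg-style "bet it all again" reasoning.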
I still feel like it's obviously acceptable to engage in unethical activities for the greater good, i.e. defrauding investors in order to send their money to deserving causes. The lesson here is that it doesn't serve the greater good to do it in a really obvious way that gets you instantly caught.
If anyone is still sleeping on Star Wars Andor, please give it a try. Fans of 'Tinker Tailor Soldier Spy' in particular. So much juicy Star Wars bureaucracy and just plain day-to-day life. Action scenes are used sparingly but when they hit, they hit hard. Blasters never felt so deadly.
EA needs to deal with the fact that its sociopathic framework will attract sociopaths.
I call it sociopathic because it frames problems and solutions in a very utilitarian, maximizing way, without much regard for other individuals' emotions. This might even be the best framing for some problems. But people with sociopathic tendencies will be attracted to such a framing. And those sociopaths are the ones most prone, in the end, to do egotistical things that harm many people.
"The past few days I’ve been thinking a lot of stuff along the lines of “how can I ever trust anybody again?”."
Scott, in light of your previous Contra Resident Contrarian post, please forgive me if I find this a little bit ironic. I am sorry you and other people are suffering because of this. I hope any damage can be minimized. But perhaps it might be time to update toward being a little bit more of a skeptic when it comes to unverified claims that other people make?
How will you FURTHER update if the stories that the Ukraine funding Congress passed was laundered back through FTX to Democratic campaigns turn out to be correct?
>5: In light of recent events, some people have asked if effective altruism approves of doing unethical things to make money, as long as the money goes to good causes. I think the movement has been pretty unanimous in saying no.
I haven't had significant interaction with the EA movement, but I *have* read The Most Good You Can Do - AFAIK fairly close to an EA Bible - and I remember thinking that Peter Singer leant quite heavily into this. I mean, sure, he didn't advocate fraud IIRC, but stock-market speculation (which he did advocate) is basically zero-sum and as such it boils down to a coat of rationalisation over "rob people and use the money better than they would"; it's a very short hop from that to the thinner coat of rationalisation that is "defraud people and donate the money to charity".
So, y'know, maybe the movement as it now exists mostly disclaims fraud, but it's not surprising that fraud shows up when it's a fairly-reasonable conclusion to come to from a principle outlined in one of EA's founding texts, and I consequently suspect "unanimous" is an overstatement (like, sure, the people who think fraud in the name of charity is great probably aren't going to *say it in public*, because lol fedposting, but they're around).
(I have actually made the "Singer advocates robbery" argument a couple of years back, but I hadn't thought of the fraud part before so feel free to discount that part as hindsight bias.)
Wishing everyone the best! If you’re among the folks impacted, remember to get yourself health resources you need. Positive investment in yourself now is a positive investment in all your future projects.
Here's a video I found interesting, about three thinkers who have influenced Putin's ideology: Ivan Ilyin, Lev Gumilev, and Carl Schmitt. The author's discussion of "Russian Lawlessness," the notion that the law in Russia has always existed in the service of the powerful, and never as a restraint upon them, is particularly worthwhile.
The author also mentions Alexander Dugin, who often comes up in modern Kremlinology, and argues that Dugin has not been particularly influential. He is more a popularizer of ideas than an originator of them. Ilyin, Gumilev, and Schmitt are the actual sources of the ideas driving the modern government of Russia.
This extract is from a local (Australian) paper today:
Shockwaves from FTX will be felt around the world, with FTX’s 1.2 million customers, including those locally, now realising they never owned any of the bitcoins or other digital currencies acquired through the exchange.
Among other things, FTX essentially sold a “paper bitcoin” or an IOU from the exchange, which barely had any asset backing. As investors tried to close out their funds, it became clear that nothing was there. In addition, it became a major custodian for start-ups to hold their cash raised in funding rounds. FTX is understood to have offered to back a number of start-ups financially if they used its facilities.
I'm crypto naive. Is this correct? If so, how is it that nobody tried to withdraw their holding over the entire lifetime of the fund to put it somewhere else? Or did they, and the bitcoins magically appeared (purchased on other exchanges) for these infrequent cases, so nobody was any the wiser?
Omnibudsman just posted a great writeup on cognitive gaps between socioeconomic and racial groups:
I was particularly surprised by the effects of maternal stress in utero on IQ. I haven't dug into the source studies yet, but I'm curious what y'all think.
I’m hoping to travel to Switzerland over the summer, and was hoping to pick up at least enough of the local language to politely ask if they spoke English. My understanding is that the “main language” is Low German, which is apparently a little different from High German, which is commonly taught in America.
1) Is this correct?
2) Is there a good beginner text for Low German to learn from for a native English speaker?
3) Any good shows I could watch in the language (ideally subbed in English)?
Edit, just to note: yes, I expect many people will speak English, but it still seems polite to put in an effort to learn their language.
Just so I understand your lessons learned (so far):
You rated the trustworthiness of FTX on par with that of Walmart, Google, and Facebook. So you’re going to set up a prediction market to help you recalibrate.
You don’t see EA leadership as culpable for being captured by a fraudulent donor, because some professional investors who allocated a fraction of their portfolios also invested in FTX. Nothing to learn about aligning with a single donor or ceding brand messaging to a single donor. Or portfolio theory, aka diversification.
You believe the SBF / FTX circle weren’t technically polyamorous or outright smoking meth, so let’s not be unfair to them. Nothing to learn about unconventional behaviors as it pertains to credit worthiness or operating expertise or key man risk.
You think SBF only started being fraudulent at some certain point, and that this wasn’t a systemic or cultural failure within FTX or the industry. So nothing to learn about the broader ethical issues pervasive in crypto, or about revising our priors (or whatever it’s called) when crypto firms consistently defraud people with tacit justifications from the community like “if it’s not your keys, it’s not your crypto”.
The smartest kids, from the most prestigious institutions, with an elite pedigree committed a most ordinary fraud by being conmen. Nothing to be learned about the kinds of fraud committed by intellectuals and the kinds of justifications they tend to use - saving the world or whatever.
There’s approximately 100 million people hearing about Effective Altruism for the first time and it’s in the context of EA being the philosophical motivation of the largest fraud of 2022. And also, the full weirdness of EA is on display as fodder for the upcoming Netflix series and movie(s).
You don’t yet see the connection between giving attention seeking TikTokers the benefit of the doubt and giving overeducated, virtue signaling Slytherins the benefit of the doubt. And why we don’t give people the benefit of the doubt when there are consequences on the line.
You are still in the earliest stages of grief. And I think this will be an incredible moment for personal growth for EA thought leaders.
One of my biggest concerns is that EA is itself playing a status game. You just have to glance through the forums to see the number of people who are more concerned about "optics for the movement" than real problems.
There are many people clearly (including Will, I would argue) using EA to build reputation above and beyond the goal of charitable work. I really don't see how this is so different than other forms of charity. Wordsmithing the rules doesn't change the fundamental status game.
Reasonable speculation, I guess (from your POV), but no, I actually live in one of the “safely” bleeding red areas of Michigan, just a few miles from where some of those White Nationalist militia wannabes got raided for their part in the plot to kidnap and execute Gov. Whitmer. Away from Ann Arbor, the rich people seem to love De Vos’s idea to just stop funding public education. I am interested in how you would help me convince my neighbors with the disabled teenager, that the Republicans in Congress who talk about doing away with Medicaid, Medicare and Social Security don’t mean just for “lazy blacks and immigrant trash”, but for them, too.
“You wanna know what effective altruism means? It means that you steal other people’s money while bragging about saving the world, while taking a big chunk for yourself. That’s what it means.”
At 49:18, from David Sacks on a reasonably popular podcast from some VC folks: https://youtu.be/Id7cNqwqt1I?t=2958
SBF being a large public face of EA means that it’s probably not a great time to embrace effective altruism as a brand, even if you strive for similar principles. Maybe even stronger: I’m not sure if the brand recognition of EA will be worth the negative connotations going forward. Probably worth reassessing in a month or so as the immediate storm passes.
My rule of thumb regarding donations from companies: they are used to clean up an image. If no reason for the cleanup is visible, that is because the reason is hidden. So accepting a donation precisely because you see no moral concerns is itself a red flag.
If a billionaire really wants to be a big donor for good reasons, they will do it on their own. Their goodness will be clearer to the public, and they won't have to deal with shareholder pushback.
I don't know about the specific finances of FTX, but I do know about audits. Auditors work based on norms, and those norms get better at catching new disasters by learning from previous ones (such as Enron). The same is true of air travel, one of "the safest ways to travel".
In fact, at the same time that we're discussing this, a tragic air travel accident happened at a Dallas airshow. Our efforts might be more "effective" if we spent them discussing airshow safety and its morality rather than SBF.
My guess is that the only way alarm bells would've rung for FTX investors was by realizing that there was a "back door" through which somebody could just steal all the money. Would a financial auditor have looked into that, the programming side of the business? I doubt it, unless specific norms were in place to look for exactly that.
I think that there are two specific disasters here:
- SBF's fraud, and
- SBF harming the EA "brand".
If what we want is to avoid future fraud in the Crypto community, then the goal of the Crypto community should be to replicate the air travel model for safety:
- Strong (and voluntary) inter-institutional cooperation, and
- A post-mortem of every single disaster in order to incorporate not regulation but "best practices".
However, if the goal is to avoid harming the EA "brand", then there's a profession for that. It's called "Public Relations".
PR is also the reason why big companies have rules that prevent them from (publicly) doing business with people suspected of illegal activities. ("Caesar's wife must not only be pure, but also be free of any suspicion of impurity.")
For example, EA institutions could from now on:
a. Copy GiveDirectly's approach and prevent any single donor from representing more than 50% of their income.
b. Perhaps increase or decrease that percentage, depending on the impact in SBF's supported charities.
c. Reject money that comes from Tax Havens.
c.1. FTX was a business based mainly in The Bahamas.
c.2. I don't know the quality of the Bahamas' standard for financial audits. In fact, I don't even know if they demand financial audits at all... but I know that The Bahamas is sometimes classed as a Tax Haven, and it is more likely that we'll find criminals and frauds with money in Tax Havens than outside of them.
d. Incentivize their own supported charities to reject dependence on a single donor, and to reject money that comes from Tax Havens.
... Campaign against Tax Havens?
- Money parked in Tax Havens crowds out money given to tax-deductible charities, and therefore to EA charities.
- There is an economic benefit to some of the citizens of the Tax Haven countries, but when weighted against the criminal conduct that they enable... are they truly more good than bad?
... Create a certification for NGOs to be considered "EA"?
- Most people know that some causes (Malaria treatments, Deworming...) are well-known EA causes.
- They are causes that attract millions of dollars in funding.
Since there is no certification for NGOs to use the name "EA", a fraudster-in-waiting can just:
1. Start a new NGO tomorrow,
2. "Brand" itself as an EA charity,
3. Watch the donations begin to pour in,
4. Commit fraud in a new manner that avoids existing regulation, and
5. Give the news cycle an exciting new story, and the EA community another sleepless night.
In fact, fraud in NGOs happens all the time. One of the reasons the Against Malaria Foundation had trouble implementing its first GiveWell-recommended program is that it was too stringent on the anti-corruption requirements for governments.
It's in the direct interest of the EA community to minimize the number of fraudulent NGOs, and especially the number of EA-branded fraudulent NGOs.
Extremely basic and obvious advice that probably no one needs to hear: Please don't have anything invested in crypto that you're not ready to kiss goodbye.
I'm pretty sceptical of the long term value of crypto, but even if you're a lot more bullish, you can't deny it's a highly volatile and risky business.
If you think it's +EV and you have the risk tolerance for it, more power to you. But be actually literally ready to walk away if it all explodes.
I'm grateful for Scott's internet thing teaching me about deontology and consequentialism. Is there a similar fancy term for the ethical system known as the golden rule? “Do unto others as you would have them do unto you.”
I note that it has a failure mode - what if you enjoy people treating you badly? Then you need to add an epicycle - “Do unto others as most people would have them do unto most people.” Which would be much more difficult, because of typical mind fallacy.
What precisely was the scam/fraud in FTX? I'm not clear on the details, and am not sure which parts are just bad financial decisions and poor circumstances, and which are clearly fraud.
I got no angle here, genuinely curious.
Given the well-deserved level of sympathy this stack and its readership has for SBF, I'll try to be as tactful as possible in bringing up another angle that ctrl-f does not yet turn up in the comments. This requires tact not because it impeaches his character (I would argue that it does the opposite) but because it uses tropes ordinarily associated with right-wing conspiracy theories. However, given the amount of updating going on this week, it seems fair to at least acquaint ACX with accusations that he was laundering money via Ukrainian corruption (for instance FTX running a donations-for-Ukraine site that took in $55 mil before being shut down and deleted - still on archive.org tho) and was making promises of larger donations to Democratic candidates (larger than his $30 mil previously, already the second-largest individual donor after everyone's favorite bugbear) in amounts not really possible for any sane expectation of how crypto makes money.
How would this be better than simple greed and ordinary fraud? Well, if he was running a racket on behalf of purely political money, we have to ask whether we care more about campaign finance than we care about winning; and if he got pulled in over his head in old-fashioned dirty money and tried to move some around to keep things going, and ended up losing actual people's cash in consequence, that at least demonstrates that he didn't start out looking to defraud them. The analogy is more that someone starting a small investment opportunity also starts working for the mob (after all, they're not violent anymore and are just doing extra-legal finance maneuvers) but ends up losing everything. The running for the Bahamas at the end with as much as he can carry? Eh, well, honestly, once the panic sets in, none of us are at our most altruistic.
What are the base rates for fraud?
I would suggest it is higher for businesses involved in crypto, or that lobby heavily, or that suddenly make lots of money, and possibly higher in ones that claim a noble purpose.
But with scientific papers, charities, exams, memoirs, etc., whenever people investigate them, fraud turns out to be endemic, as Scott has shown lots of times.
Maybe I am being overly cynical, but you should probably assume a decent share of your friends, the businesses you use, the charities you donate to, the scientific papers you respect or that have entered the zeitgeist, and the books you read are completely dishonest and fraudulent.
Software Engineer looking for work. ~15 years experience, worked at a FAANG company for a while, also have worked at smaller places.
Let me know if you are hiring for anything interesting. Prefer work that benefits the world or offers opportunities to master new skills.
A few months ago, FTX bought a stake in my former employer. At the time I was a little miffed that I wasn't able to participate in the transaction, it's a private company that doesn't pay dividends and I can't easily sell it outside of an arranged transaction like this. Now I'm glad I didn't end up with Ponzi scheme blood money.
It is humbling to know that my ex-coworkers, people who have been in the finance industry for years, people I respected and thought had well-honed bullshit detectors, still fell for FTX. Personally I've never touched cryptocurrency but figured, hey, if "my people" like this Bankman-Fried guy, he's got to be less scammy than the rest of them, right? Ha ha no.
We're all idiots now.
There have been lots of frauds over time, and there will be many more. It turns out that SBF was especially good at branding himself with EA and thus avoiding some scrutiny, but Elizabeth Holmes, Bernie Madoff, Enron, and many others have done similar stuff without any particular philosophical trappings. I suppose some introspection in EA circles is warranted, but most of the self flagellation seems somewhat beside the point. Now if there were a trend of EA types defrauding people, that would be a different story. For now, this seems like a one off. Bernie Madoff is Jewish, and his fraud (bigger than FTX, by the way) didn't lead to Jewish people reevaluating the morality of their religion or ethnicity, nor should it have.
Now crypto is a different story. There IS a trend of crypto people defrauding everyone they come in contact with, and at this point my take is that the whole space is quite rotten. If anyone needs to do some soul searching, it's crypto. Or maybe just a few prison sentences will do.
Regarding point 5 (EA people have always condemned doing bad things for good reasons), how different is this from corporate mottos and the like? Every corporation in the world has some sort of statement somewhere saying 'we will always put the customer first' and every corporation in the world also transparently acts in a way at odds with this.
When you read a story about a corporation absolutely shafting a customer and the media print a response from the company spokesperson saying "[Corporation] is deeply committed to the principle of [not doing that]", most people don't treat that as much of a defence of the company - the bad actions speak a lot louder than the pretty words.
In this particular case I think EA is clearly more committed to the principles of Rule Utilitarianism than a random company is committed to 'Embedding Excellence in Every Widget Sold', but I do question whether they should get any credit for that unless they actually enforce Rule Utilitarianism as a condition of receiving EA money (or whatever - I mean some concrete step that elevates Rule Utilitarianism above other plausible Consequentialist frameworks).
Don't be too hard on yourself. A Scott Alexander skeptical enough to detect the deception would have spent a lot of mental energy on that, and probably racked up a bunch of false positives over the years. Since you weren't participating in the fraud or making more fraud possible, I think you have little to worry about. It's a terrible situation, but I don't think anyone should seriously "Lower [their] opinion of [you] accordingly" (even for just recommending it in a hypothetical situation). Even if senior members of EA found SBF really shady, I still think it's okay to take a shady person's money and spend it on charitable giving or important research. The alternative, it seems, is to let him keep his money while your charity remains underfunded.
Plus ça change, plus c'est la même chose: "the more things change, the more they stay the same." Finance + innovation + human brokenness = failure and catastrophe; it's happened for a long, long time. And what's that other saying innovators like to use? "This time it's different."
Seems like many EAs have decided that they never bought into consequentialism/utilitarianism after all. And sure, if you look through the thousands of pages written, you’ll find some hedges against doing intentional harm for the greater good.
Unsurprising that nobody has the chutzpah to say the quiet part out loud: If SBF succeeds in destroying the credibility of crypto, ushering in regulations that prevent anonymous ownership of huge piles of capital on blockchains, that would have huge implications for AI safety, since it limits one vector through which AI may control significant resources.
Call it a “pivotal act” if you will.
Now that crypto is in the news again (for something bad, like always), I have to say I'm surprised that crypto people still believe in this garbage. Like, how is it that people still say this is the future of currency? How many more scams does it take to prove to you that crypto has no practical use (other than scamming, lol) and is valueless? I mean, just look at the whole community: it's literally all a Ponzi scheme (see https://ic.unicamp.br/~stolfi/bitcoin/2020-12-31-bitcoin-ponzi.html by Jorge Stolfi).
Why would the economy of the Bahamas suffer because of this?
I guess FTX was a decent-sized company, but I can't imagine it occupied more than an office building, maybe a few hundred employees. That's small compared to the whole economy of the Bahamas, surely?
If your market goes above 33%, do you then have to meditate until you achieve ego death, thus severing all ties with yourself?
If anyone wants to support research on oogenesis and meiosis, I'm looking for funding. I was awarded $300K from FTX regranting, but it looks like they won't be able to actually provide it. My current funding will run out in August, and if I don't get more (at least $100K) I'll have to severely cut back and lay off my assistant.
My email is metacelsus at protonmail dot com
The FTX situation ties in well with the fraud triangle, almost too perfectly: rationalization, pressure, and opportunity. It was all there. Donald Cressey discussed the idea of a nonshareable financial problem as part of rationalizing fraud, and I see that as a situation where SBF and crew must have rationalized using customer funds as a way to avoid admitting their own failures. There are no great lessons from this, it has happened before and will happen again.
Many people here seem to be thinking of SBF as a brain in a vat making moral calculations and failing. I see a very young man surrounded by other very young men and women in close but complicated relationships, isolated in a penthouse in a country far away from their parents and other relatives, possibly using various drugs that are sometimes known to interfere with good decision-making, and swamped with so much money that everyone from Bill Clinton on down was kissing their asses.
I defy any of you to make excellent choices in those circumstances. This is not an excuse; it's an explanation. It's not an excuse because you can always remove yourself from those circumstances when you start to feel yourself sliding down the wrong chute. But it is a very solid explanation.