254 Comments
Comment deleted

Paula, I don't think the field of posting spammy links to nudes is a neglected cause area.

Comment deleted

Comment deleted

This seems insulting for no reason. I fear that the SSC/ACX comments are going downhill, likely because of decreased political diversity.


While I agree that marxbro is generally making the comment section worse, in the context of this thread the insulting joke was neither necessary nor kind (and so we should strive to do better).


I don't make the comments sections worse; I'm one of the few commenters here who fact-checks Scott when he writes about left-wing topics.

Mea culpa: it is true that my joke was neither necessary nor kind. I do think there was a bit of truth to it, but 1 out of 3 is not enough: I'll delete it.

Props for recognizing it as such (and I agree on the truthiness of the statement).

How was the statement true?


I believe that you repetitively bring up the same small number of points, in a generally aggressive way.

To look at the comments on this post:

1) 'But is it a worthy endeavor from a Marxist perspective?'

If you have an opinion on the worthiness of this endeavor from a Marxist perspective, it seems like it would be much more interesting if you explained why the endeavor was good/bad. Just mentioning 'Marx?' doesn't seem fruitful.

2) 'There was never any mob and Siskind was just complaining that the New York Times had the temerity to print his name in an article.'

Again, this is an argument you seem to bring up whenever it's vaguely relevant (or when it's not). I think Scott's additional arguments that the piece deliberately mischaracterized him are at the very least plausible, and implying that there's no other reason Scott could object seems like you're just trying to start shit. To run your comment through the gate test:

Is it true? Probably not.

Is it kind? No.

Is it necessary? Probably not.

To go back further, 'Secrets of the Great Families' has two civil and relevant comments by you. Props, no objection from me.

In 'Epistemic Minor Leagues', you have a semantic argument that while Marx believes Communism is the inevitable outcome of Capitalism, this doesn't amount to historical determinism.

After reading this more, I'll quote John Wittle's excellent comment in full, and then never reply to you again:

"I would say that people on SSC have spent more time arguing over whether Marxbro is a troll than they have on any other individual topic. The consensus is that he is not. He has accounts on several other sites where he does the same thing he does here: defends Marx unconditionally.

Generally he responds to things that have very little to do with Marx, and wrenches the conversation in the direction of Marx, and then continues pushing forever. His favorite topic in these comments is lambasting Scott for being critical of Marxism without having considered xyz arguments. He comments this in some form on nearly every single post.

And it always leads to a very long argument that most people are extremely tired of having. Both the object-level "arguing with marxbro" and the meta-level "arguing about what Scott should do about marxbro to keep him from ruining the comments section any longer.""


This is still an implicit insult and more ‘piling on’ behavior.

Think about it for a moment. I’m pretty sure you’ll understand what I’m saying.

I wish I could say it was childish but I see it all too often in adults.

Respectful consideration of good faith arguments even if you happen to disagree is a major part of the appeal of ACX.

This ganging up on the ‘other’ no matter how obliquely done is screwed up.


You’re assuming “good faith arguments” which deserve “respectful consideration.” If I were interested in continuing this discussion, I would give reasons why and examples of how those assumptions are not accurate in this case. But I’m not, so I won’t.


At some point, it seems reasonable to no longer believe arguments are being made in good faith. For example, if someone replied to literally every thread with a statement semantically identical to "But what about climate change?" and then made tangential follow-ups when questioned, I'd be forced to conclude they were arguing in poor faith (or otherwise not worth the time). The same would hold true for someone who did the same for 10% of threads, 1% of threads, and most likely .01% of threads. I believe that marxbro is in a similar position.

I also think we're quibbling about an awfully small difference in implication. Your comment that the insult was neither necessary nor kind has heavy Gricean implicature that the insult was true, so I would prefer a slightly less sanctimonious admonishment about avoiding oblique insults.


Paula's grant proposal fits the conditions specified by Scott surprisingly well.

Online porn addresses global poverty (for poor people, finding online porn is cheaper than paying a sex worker) and global health challenges (you can't get AIDS from watching porn). Porn is often a driver for modern technology (source: https://www.youtube.com/watch?v=zBDCq6Q8k2E ). It helps people stay at home during pandemics...


this made me laugh :)

Comment deleted

This + lobbying for more MAOI use would be actually pretty impactful

Comment deleted
author

Thanks for the kind words. I don't have any plans to create a network; maybe I should figure that out but I can't guarantee it. Does anyone know how this is usually done?

I suppose if you wanted to game the system you could apply for a very small grant to improve some aspect of your operations, and check the box to be included in ACX++, and then I would have to advertise you!

Comment deleted

I want to signal boost this - particularly for start ups or individuals/small groups operating outside of accelerators/universities, funders play a much bigger role than just money - they can provide connections, sounding boards, boring-but-essential admin tips, etc.

The network already exists; I'm typing stuff into part of it right now. If the ACX-funded subnetwork wants to start their own [whatever is recommended as an alternative to a discord server] then they can. And I'd not expect this endeavour to take on an institutional nature overnight.


At minimum, setting up a Discord or Slack for all the funded projects and seeing what people use it for seems like a straightforward and good step. YCombinator may be the premier example here, although they have a fairly complex internally developed mashup of LinkedIn and Facebook for the job (known as BookFace)

Comment deleted

The FAQ above says applications close in 2 weeks.


Might be good to put a date on that, just for clarity. Especially for people who don't see this post on its first day.


This is wild. Excited to see where this goes


This is truly one of my favorite developments coming out of the rationalist community/Progress Studies/EA sphere.

What's the point of all this money sloshing around the economy and crypto if it's not gonna fund moonshots?

I thought about starting a rationalist community DAO, bootstrapping the token value a la any of the tokenomics mechanics printing money in crypto, creating a community fund, and voting on projects to disburse tokens to.

author

My hope is that this year I do a very normal grant round to set a baseline and make sure I can do this at all, and then next year I figure out some kind of crazy innovative idea like that (though probably with less crypto).


>make sure I can do this at all

My preregistered hypothesis is that this becomes one of the most successful and important things you do (as determined by you subjectively).

remind me! 10 years


Do we have remind-me bots for Substack???


No, tongue in cheek. But my hypothesis is genuine.


Now I’m wondering what it would take to set one up. Seems useful


Apply for a grant to develop one


Given that substack doesn't have an API ... probably a web scraper, a server and a droplet or two? This sounds like a weekend project at the outside for an interested hacker (especially if we only care about setting it up on this substack).
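
A minimal sketch of the parsing half of such a bot, assuming some hypothetical scraper supplies raw comment text (the pattern, function name, and date handling are all invented for illustration, since Substack exposes no official API):

```python
import re
from datetime import datetime, timedelta

# Trigger phrases like "remind me! 10 years"; scraping the comments
# themselves is left to the hypothetical scraper component.
PATTERN = re.compile(r"remind me!\s*(\d+)\s*(day|week|month|year)s?", re.IGNORECASE)

def parse_reminder(comment_text, posted_at):
    """Return the datetime the reminder should fire, or None if no trigger."""
    m = PATTERN.search(comment_text)
    if not m:
        return None
    n, unit = int(m.group(1)), m.group(2).lower()
    days_per = {"day": 1, "week": 7, "month": 30, "year": 365}
    return posted_at + timedelta(days=n * days_per[unit])

posted = datetime(2021, 11, 12)
assert parse_reminder("remind me! 10 years", posted) == posted + timedelta(days=3650)
assert parse_reminder("great post", posted) is None
```

The fired reminders would then need a small persistent queue and some way to notify the commenter, which is where the server and droplets come in.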


Let me know if you get anywhere on the rationalist DAO idea.

One thing I've been mulling over recently is launching a dog coin which is actually a Trojan horse for distributing funds to EA causes. The amount of money that can accrue to a (successful) dog coin is staggering. The trick is just to figure out how to make it more of a Schelling point than all the other trash floating around in that category. It would probably help that we could get Vitalik and a few other crypto "celebrities" on board - I think a lot of people would be glad to see the stupidity of the current market put to good use for once.

founding

Google "rainbow rolls NFT". Similar idea.


> starting a rationalist community

Maybe something set on 1 km^2 of farmland (to give lots of room for expansion) with a starting population of 100-1000. The intention being that it would grow to a population of 10,000+

It could be set up as model city that would serve as an example of what a polity under Archipelago might look like (https://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/)

It might include the creation of a Rationalist Religion (in the sense of rituals/beliefs/values for making a community cohere, NOT in the sense of a cult).

Maybe cryptocurrency could be involved somewhere.

It would have a constitution meant for other organisations with similar but not identical goals to modify and reuse. (So eventually there might be lots of separate Archipelago communities (e.g. an LGBT one, a social conservative one, a Georgist one, etc)-- all with their own separate principles but all agreeing to Archipelago Communitarianism).


I tried this once. The twice five miles of fertile ground were really pleasant, but the woman wailing for her demon lover eventually ruined it.


Greg Cochran has suggested that if you replace nitrogen with helium in the air you breathe you might increase cognition and alertness. If this is tested and proved true there would be enormous social benefits, as we could put scientists (and AGI safety researchers!) in sealed rooms with such air. I've been trying to get someone to test this idea. As an economist, I'm not someone any sane person would let mess with the air people breathe. But if anyone reading this has the qualifications to run the experiment, perhaps ACX Grants would be the right place to apply for funding.

author

I can't deny this is my kind of experiment, but I feel like it would be more kabbalistically appropriate to apply for a Helium Grant https://www.heliumgrant.org/

Comment deleted

From the website:

Q: Are you ever going to do Helium Grants again?

A: I don't have any plans to start up again, but you never know.


Nadia has ceased to make Helium grants. She's working on a new book, I think.


It's a nice idea, but my application for a MacArthur Grant to invade the Philippines has been repeatedly declined.


Haha!


This strikes me as remarkably difficult and unproven relative to old-fashioned stimulants (Ritalin/Adderall/modafinil). But I'd love to see it studied anyway!


I’m just imagining these really smart people talking about world changing ideas with high squeaky cartoon voices.


Greg has suggested that if it works speaking with a high squeaky cartoon voice would become a sign of power and authority.

You didn't mention why it's worth looking at: nitrogen narcosis (https://en.wikipedia.org/wiki/Nitrogen_narcosis). Basically, divers who dive with nitrogen find that at higher partial pressures the nitrogen is an intoxicant. Every gas except helium & neon is an intoxicant to some extent: xenon at low pressures can knock people out. So if 2 atm of nitrogen pressure makes you tipsy, then what does the 1 atm we're all experiencing all the time do? And if we got rid of it by replacing it with helium, then could we make people knurd?

Heliox wouldn't be that expensive to test out. Professional divers use it, and there's medical uses as well. You can either find some diver's tanks and hook people up, or get some medical grade mixers to mix oxygen & helium from tanks and run a nasal cannula.

The more interesting question is what you'd want to test. Let's suppose we can get, I dunno, 30 people to wear nasal cannulas for 2 hours. I'd do something like this: set up the mixers appropriately so that oxygen is constant @ 21%, but you can swap the remainder smoothly between helium & nitrogen. You set things up so that people are on normal air for one hour and heliox for the other, with the order randomized.

As for the tests, maybe some combination of standard Raven's matrix questions and reaction time tests?
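
The proposed within-subject crossover could be analyzed roughly as sketched below; all numbers are simulated placeholders under the null hypothesis of no heliox effect, not a real protocol:

```python
import random
import statistics

# Each of 30 simulated participants is scored once on normal air and once
# on heliox, in random order; the analysis looks at paired differences.
random.seed(0)

def simulate_participant():
    air = random.gauss(100, 10)          # score on normal air (invented scale)
    heliox = air + random.gauss(0, 5)    # null hypothesis: no true effect
    order = random.choice(["air-first", "heliox-first"])
    return air, heliox, order

pairs = [simulate_participant() for _ in range(30)]
diffs = [h - a for a, h, _ in pairs]

# Pairing cancels between-person variation in baseline ability, which is
# why a crossover needs far fewer subjects than a between-groups design.
mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / len(diffs) ** 0.5
t_stat = mean_diff / se                  # compare to a t-distribution, df = 29
```

With real data you would also check for order effects (e.g. practice gains on the second session) before trusting the paired comparison.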

Can't seem to find heliox pricing anywhere, but I suspect you could do this for <$100K.


I think you could self experiment for <$100 and notice it if there was any big effect worth noticing. I found a study showing a massive improvement in cognitive function of divers under ~3.6atm of pressure using heliox vs compressed air. (https://europepmc.org/article/med/32176950) but no study at 1atm.


The Stroop Test? What a can of worms this opens. The things I learn reading ACX, I tell ya.


Here is a 1975 study of replacing nitrogen with helium at 1 atmosphere:

https://europepmc.org/article/med/1130736


> So if 2 atm of nitrogen pressure makes you tipsy, then what does the 1 atm we're all experiencing all the time do?

It doesn't; most people need about 4 atm (nitrogen partial pressure going from 0.7 to 2.8) to experience it.


It's a very implausible hypothesis anyway, contrary to the way almost all physical and biochemical systems operate. Systems in stable equilibrium, and subject to natural selection, are optimized for performance under normal conditions -- whatever they are, and one thing we can be sure about for humans is "normal" includes 0.8atm of N2. Deviations from normal, in either direction, can reasonably be expected to degrade performance.


This leads to the prediction that since human bodies are optimized for gravity on earth, measured athletic ability for humans on the moon would be lower than that on earth.


And so it would. I am confident that the winners of the World Cup would play far less competent soccer on the Moon. Gold-medal winners in the vault and floor exercises on Earth would probably injure themselves doing relatively simple routines on the Moon. Even Usain Bolt would probably run a slower 200m on the Moon than on the Earth, technique and timing being far more important in sprinting races than people ordinarily think.

I take it you are referring either to the crudest of possible measures of "athletic ability" like how far you can throw a ball or how high you can jump, or you are thinking of athletes who trained and got used to lunar gravity. The former is just sort of silly, but in the latter case I am not convinced that even with extensive training Earth-born athletes could out-compete native Lunar-born athletes (if those existed).

For what it's worth, we can also note that in the earliest years of spaceflight people hypothesized that microgravity would be good for people, because the heart had to work less hard, there'd be less wear and tear on the body, et cetera. But as it empirically turned out, microgravity is on balance bad for the body and degrades health, and these days we recognize that a body tuned for 1g does *not* do well in 0g, contrary to the naive hypothesis.


Counterexample: plants grow faster under increased CO2. This is seen both in the fossil record as well as in direct experiments.


I'll gladly self-experiment with a 79-21 mix of helium and oxygen, but I don't need the money. I'm surprised if divers and astronauts don't already do this, since helium is so much lighter than nitrogen. It's $7.57 per cubic meter, and we breathe 11 cubic meters per day, so even if the helium was not recycled at all, the cost would be only $7.57*11*0.79 = $66/day


Try not to kill yourself, which is quite possible if you're playing with pure helium tanks. Get an oxygen tank too, along with a medical-grade gas mixer. Remember that the asphyxiation response is caused not by lack of oxygen but by the presence of carbon dioxide. If you're breathing a 5% O2 95% He mix, you'll happily pass out and die without ever feeling short of breath.


People on Everest seem to notice the lack of oxygen

I think AW is right about this. I was once in a room where someone left a nitrogen tank open - fire extinguisher propellant - and I started to get light-headed and was close to keeling over when someone noticed the leak and turned it off. No distress at all though. It’s the CO2 build-up that causes distress.

I’ve read that death penalty states ruled out using nitrogen gas as a method of execution because death by that means is too pleasant.

I would think too much helium would be the same.

Yeah, thin air at high altitude is noticeable, but that is a different effect entirely.


> I’ve read that death penalty states ruled out using nitrogen gas as a method of execution because death by that means is too pleasant.

Any polity that has the death penalty and uses that argument is not, IMO, morally qualified to have the death penalty.

Okay, I checked on this. Alabama, Oklahoma and Mississippi have authorized execution by nitrogen hypoxia. No state has used it yet as far as I know.

The business about states not using it because it is too pleasant may have been an editorial comment in an earlier article. The method is expected to result in sedation followed possibly by euphoria. I think the author of the recalled article thought this would go against what death penalty advocates wanted from an execution.

So some speculation on that writer’s part and imperfect memory on my part.

It sounds unlikely. Generally states that want to apply the death penalty are hounded by people making disingenuous arguments about how cruel <whatever method is proposed> is, while the same people work to make alternative methods impossible.

Personally I'm generally opposed to the death penalty because there can be irreparable miscarriages of justice. But if you ARE going to kill someone judicially, there seems little reason to sweat over the fact that death is generally unpleasant, whether you are being executed or not. Lop off their heads or whatever works, in most cases it's better than their victims got.


Not the same way. When you're at altitude you notice you're out of energy, and often out of motivation -- a serious problem in mountaineering by the way -- but you don't feel like you're suffocating.


In fact, one of the biggest dangers of mountain climbing is *not* noticing your body is low on oxygen. You just get dumber and dumber until you die.


Now *this* is the kind of Mad Scientist malarkey I'm here for!

Pros: come up with genius insight into pressing problem

Cons: people too busy laughing at your Donald Duck voice on helium to take you seriously


On the Internet, no-one knows you're a duck.


I laughed at this one


Oh well then you might enjoy Cody's Lab where he breathes *all* the noble gases. The results with helium and krypton are most fun, I think:

https://www.youtube.com/watch?v=rd5j8mG24H4&t=12s


I think you are a really good person and I think you are extremely charitable for doing this.

But is this a better use of the money than standard effective charity donations? $250,000/~$3200 = ~78 lives not saved. If it is a better use of the money, then wouldn't it make sense to put your 10% into the research grant? After $250,000 does the research have diminishing returns such that it is better to give to the standard effective charities? It might be worth fleshing this out some more if lives are on the line.

Also, it seems like you should sell more NFTs for sure. Why not? It produces carbon which you could offset. But...I wouldn't offset the carbon, I would just save more lives.


I think spending a tiny, tiny fraction of EA money on moonshots that have a nonzero probability of dramatically increasing QALYs, or actually saving lives, makes this a really good, in fact underrated, idea.

That's a generic/a priori argument, and I understand the demand for providing those probabilities rigorously when the opportunity cost is lives saved. But that's the thing: venture capitalists, philanthropies, smart people and capitalism generally have all been attempting to predict the impact of moonshot ventures for a long time and it's *really, really hard* to predict what will work and what won't, so at some point one has to make a judgment call and say, time for some moonshots.

author

Good question.

One answer could be that since rationally I should fill up the most expected-effective category before moving on to anything else, and we're talking about categories much too big for any individual to ever fill up, this proves that I'm not donating fully rationally and am instead trying to satisfy psychological needs (spending some money to feel risk-seeking and innovative, then other money to feel certain that I'm doing at least some good). I can't argue with this, but I think there are compelling arguments for either being the higher-utility one, and although they're probably not *exactly* equally compelling, they're close enough that the value of satisfying my psychological needs is higher (to me) than the expected value gain of doing whichever one turns out to be better.

Another answer is that since I'm not especially rich but I am especially a public figure, the value of almost everything I do is as a role model. I'm role-modeling pledging 10% of my money to effective charities (which I think more people should do), and I'm also exploiting my public-figure status to get proposals for exciting new ideas that richer people than I am can fund. These don't trade off against each other the same way that money does.

(maybe you could counterargue that I should role model being rational and not putting my psychological needs above relatively-small-but-absolutely-large gains in expected utility, but I'm not sure people would listen to that message. I wouldn't!)

To sell NFTs, I would actively have to ask for them or advertise them, and I think the reputational damage this would do is more costly than the money I would get. But if anyone wants to unsolicitedly buy an NFT from me, sure, whatever, send me an email as long as you're willing to pay enough for it to be worth my time (I am technically unsophisticated, you would have to walk me through the process, and it would take a while).


Yet another answer is that since you are already (separately from this, if I understand correctly) donating 10% to charity, this doesn't actually funge against saving lives but against your metaphorical beer fund, and therefore you are completely off the hook for that particular question as per one of your old SSC posts (this one, I think: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/).


"maybe you could counterargue that I should role model being rational and not putting my psychological needs above relatively-small-but-absolutely-large gains in expected utility, but I'm not sure people would listen to that message. I wouldn't!"

You should listen to that message tho cuz it is the correct one in this situation.


Your argument isn't very convincing. I think the problem comes from conflating "rational" -> maximizes utility, and "rational" -> uncompromised by human emotions. Both senses of rational are ideals rather than realities, but even in the mixed and muddied real world the two senses output very different results. It seems like Scott is trying to be rational in the sense of maximizing his utility by meeting his emotional needs, while you are trying to be rational by not letting emotional needs get in the way of clear numerical calculations. Anecdotally, I cannot recommend utilitarians try to discount/ignore their emotional needs. It works no better than trying to discount/ignore physical needs.

(Of course if saving as many lives as you can is your big emotional need, then go right ahead!)


Honestly, I think this idea is possibly the maximal EA for Scott, regardless of whether it also happens to serve his emotional needs.

Ord's thesis in The Precipice is, after all, that if you value humanity's future at some non-hugely-discounted rate then X-risk swamps everything. The chance of something coming out of here that substantially ameliorates an X-risk is low, but given that X-risk charities are mostly doing similar things to this anyway and ACX has more penetration than most of them it's probably the biggest splash in that pool Scott can make.


> rationally I should fill up the most expected-effective category before moving on to anything else

This doesn't sound right to me.

It's not true, any more than it's true that a rational investor seeking to maximise his returns should throw all his money into whatever he thinks will have the highest return. That's a silly way to invest; instead a rational investor should acknowledge his own ignorance about which investments will pay off, and hold a portfolio of different investments in different areas with different levels of risk.

(Project idea: apply Modern Portfolio Theory to Effective Altruism projects.)
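
A toy version of that project idea, with all numbers invented as stand-ins for project impact estimates: pick the split between two "projects" that maximizes expected impact minus a risk penalty, the core move of Modern Portfolio Theory.

```python
# Two hypothetical charity "projects" (all figures invented for illustration)
mu = [0.9, 0.3]      # expected impact per dollar
sigma = [0.8, 0.1]   # standard deviation of that impact
rho = 0.0            # assume uncorrelated outcomes

def portfolio(w):
    # Mean and variance of impact when fraction w goes to project 0
    m = w * mu[0] + (1 - w) * mu[1]
    v = (w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2 \
        + 2 * w * (1 - w) * rho * sigma[0] * sigma[1]
    return m, v

# Grid-search a risk-adjusted objective: mean minus a risk penalty.
# An interior optimum (0 < w < 1) is exactly "diversification".
best_w = max((i / 100 for i in range(101)),
             key=lambda w: portfolio(w)[0] - 2.0 * portfolio(w)[1])
assert 0.0 < best_w < 1.0
```

Whether a variance penalty is even the right objective for altruism (rather than pure expected value) is the crux of the disagreement in the replies below.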


Also, consider comparative advantage. Any semi-rich jerk can give $250K to random charity, but Scott is uniquely well placed to run this kind of project, because he has a lot of smart readers and all that.

If someone else (e.g. me) tried to run the same sort of grant project it would go much worse, because I'd probably have to resort to advertising on lamp posts in my local area or something and I'd attract a much lower quality of applicants.


I think there is a big difference between investing and charitable giving that makes this analogy not as applicable.

When doing personal investing, you are trying to maximize your expected utility, but the way you operationalize this is by trying to make money. If your goal was solely to maximize your expected amount of money, you really would choose the one stock with the highest expected return and ride it until some other stock began to look more promising. But since your goal is actually to maximize utility, and since utility is a sub-linear (perhaps logarithmic) function of money, you act more safe and diversify.

When charitable giving (as an effective altruist at least), you are trying to maximize your expected positive impact on society, and the way you operationalize this is by finding the interventions with the highest expected positive impact. There's no disconnect here like there is between wealth and utility in investing. And the way to maximize your expected positive impact on society is to find the most promising cause, and donate all your money to it (that is, unless you donate enough money for there to start being diminishing returns, which is not a problem for most smaller-scale donators).

I say all this as someone who does diversify his charitable giving somewhat, but I'm not sure if I'm doing the right thing.


I think you're confusing "rational" with "return maximizing". The point of diversification is hedging against losses. A return-maximizing investor should sink all their money in the offer with the highest return, possibly lose it all and say "Shrug, it was the correct strategy anyway."


If you apply the reasoning of "I should fill up the most expected-effective category before moving on to anything else" to an investment portfolio, you would end up with a diversified portfolio. Your first $1 goes into, say, a stock index fund. The expected utility of an additional dollar into stocks isn't as high as the expected utility in bonds, because adding marginal risk reduction increases expected utility more than adding marginal expected return. So your next $1 goes into bonds.

It's debatable when and to what extent charitable projects experience diminishing marginal utility. But to the extent that they don't, you should put all of your dollars into whatever category has the highest expected utility.
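
The marginal-utility argument can be sketched numerically; the category weights and the log-utility form are invented for illustration. With diminishing returns, greedily giving each dollar to the highest-marginal-utility category produces a mixed allocation on its own:

```python
# Two hypothetical categories with diminishing returns:
# utility = weight * log(1 + dollars), so marginal utility = weight / (1 + dollars)
weights = {"stocks-like": 3.0, "bonds-like": 1.0}
alloc = {k: 0.0 for k in weights}

def marginal(k):
    return weights[k] / (1.0 + alloc[k])

# Allocate $100 one dollar at a time, always to the best marginal use
for _ in range(100):
    alloc[max(weights, key=marginal)] += 1.0

# The higher-weight bucket gets more, but not everything: diversification
# falls out of the greedy rule itself once returns diminish.
assert alloc["stocks-like"] > alloc["bonds-like"] > 0
```

With linear (non-diminishing) utilities the same loop would put every dollar in one bucket, which is the "to the extent that they don't" case in the paragraph above.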

The Kelly Criterion says that your bankroll growth rate is maximized when your bet sizing maximizes E(log(bankroll)). If you go all in on one investment, there is a chance your bankroll goes to zero, and the log of zero is negative infinity, so E(log(bankroll)) is not maximized.

In charitable giving, you just try to maximize E(utility) instead of E(log(utility)). There are enough other people doing charity work that THEY provide the diversification. So if an omnipotent being offers me a coin flip, wherein if I lost the flip I get nothing, and if I won the flip I'd get 10^69420 QALYs for each dollar I wagered, I am definitely going all in.
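
The Kelly math can be checked directly; the 60%-to-double bet below is an invented example, not anything from the thread:

```python
import math

# A bet paying b-to-1 with win probability p has Kelly fraction f* = p - (1-p)/b
def kelly_fraction(p, b):
    return p - (1 - p) / b

def expected_log_growth(f, p, b):
    # E[log(bankroll growth)] per bet when wagering fraction f
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                    # 60% chance to double the stake
f_star = kelly_fraction(p, b)      # 0.2

# Betting more or less than f* grows the bankroll slower in the long run;
# as f -> 1, E[log] -> -infinity because a single loss zeroes you out.
assert expected_log_growth(f_star, p, b) > expected_log_growth(0.5, p, b)
assert expected_log_growth(f_star, p, b) > expected_log_growth(0.05, p, b)
```

The E(utility)-vs-E(log(utility)) distinction is exactly why the coin-flip offer at the end of the comment is an "all in" for a linear altruist but ruinous for a Kelly bettor.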

Forgive my mathematical illiteracy, but what difference does the log operator make here?


A logarithmic utility function makes you more risk-averse. If there's a coin flip that doubles/halves your bankroll, the linear utility maximizer would always do it because E(bankroll) is 1.25*pre_bankroll. But the logarithmic utility maximizer would be indifferent.

Logarithmic utility is appropriate whenever your future income expectation is directly proportional to your future bankroll, which is a halfway decent approximation for both investing and professional gambling. But it ignores living expenses (which should make you more risk-averse) and it ignores diminishing returns where larger investments result in smaller rates of return (which should make you more risk-tolerant, and which are pervasive in both investing and professional gambling). Most professional gamblers bet half-Kelly or less, but I'm on the more risk-tolerant end of the spectrum.
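
The double-or-halve flip from the comment above, worked out numerically:

```python
import math

bankroll = 100.0
outcomes = [2.0 * bankroll, 0.5 * bankroll]   # double or halve, 50/50

linear_ev = sum(outcomes) / 2                 # 125.0 = 1.25 * bankroll: take it
log_ev = sum(math.log(x) for x in outcomes) / 2

# Logarithmic utility is exactly indifferent:
# log(200)/2 + log(50)/2 == log(sqrt(200 * 50)) == log(100)
assert linear_ev == 1.25 * bankroll
assert abs(log_ev - math.log(bankroll)) < 1e-9
```

The log maximizer is indifferent because the geometric mean of the outcomes equals the status quo, which is the sense in which doubling and halving "cancel".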

An investor is trying to maximize his own risk-return tradeoff, thus his personal investments need to be diversified. But an Effective Altruist isn't trying to maximize the risk-return tradeoff for the projects he funds. Since the benefit accrues to the world, the proper portfolio is all of charitable giving, or something like that. Since the amount of money Scott is giving is small, relatively speaking, the "rational" thing to do is almost always to put it in one place, to get the world portfolio closer to optimal.

This argument gets weaker if you're talking about Bill Gates-level donations, where you might actually fill up a bucket to its optimal position.

Expand full comment
founding

Diversification is called the only free lunch.

Expand full comment

Progress probably trumps direct charity in the long run... although I'm sure EA has a good response to this

Expand full comment

(1) As Roko pointed out on Twitter (https://twitter.com/RokoMijic/status/1459121955819425796), as regards carbon emissions, it isn't very consistent to own Bitcoin/Ethereum or any other Proof of Work (PoW) cryptocurrency, while anguishing about NFTs in particular. (Assuming you do, I don't know). Especially since these NFTs would presumably reside on Ethereum, which is moving towards non-polluting Proof of Stake next year, so if anything driving up activity on it increases the chances that it flips Bitcoin sooner (this will be good from a climate perspective since $BTC has no plans to move away from PoW).

(2) Or you could, even today, sell your NFTs on a non-PoW chain such as Solana, which is probably the chain that has the best chances of flipping $ETH in turn. NFT market on Solana: https://solanart.io/

(3) You could also buy $KLIMA (https://www.klimadao.finance/) which buys up carbon credits, in amounts commensurate to whatever you think the carbon impact of any particular NFT you mint and sell is.

(4) Last but not least, customary reminder that moderate climate change towards a warmer world is almost certainly a net good, but I suppose that's a bit OT here. In any case, the US military alone emits far more carbon than all cryptocurrency combined, and is also probably a net negative for global welfare whereas crypto is a massive net positive. From this perspective, wouldn't it be more moral (if admittedly also more dangerous) to try to maximize tax evasion?

Expand full comment

> Last but not least, customary reminder that moderate climate change towards a warmer world is almost certain a net good

I tried to track down this claim, and I found: an article by Matt Ridley, which cites a paper by Richard Tol; multiple corrections to the Tol paper due to data entry errors, which Tol says do not change the conclusion; and two posts by Andrew Gelman claiming that the paper and correction have additional errors that throw this all into doubt.

Is that indeed the source for your claim? If so, the information sounds woefully inadequate (like the question is under-researched, or has been summarily ignored due to apparent bogosity, or both).

It hardly needs to be said that this is a fringe belief, and unsurprising that Matt Ridley and Richard Tol are both accused of being climate science deniers (etc.), but whether that is a cause or an effect is unclear to me.

(Matt Ridley is a journalist, libertarian, and viscount. Richard Tol is an economics professor. Andrew Gelman is a statistics and polisci professor.)

Expand full comment

I commented in greater depth on the old AGW specific thread. But TLDR: Many of the risks are overstated. Coastal megapolises are sinking 10x+ faster due to groundwater depletion than sea level rise. Warmer world = wetter world (cold produces drought, which is the real killer of civilizations historically), with a greater fertilization effect.

No, it's not based on any of those people, but paleoclimate evidence (as in, things that actually happened, as opposed to speculative models trying to incorporate many kinds of phenomena that are very poorly understood even in isolation, let alone as part of a complex system), e.g. Sahara being a verdant garden hosting elephants and hippos when the world was 2C warmer. Deep Future by Curt Stager is a good introduction.

Expand full comment

My biggest problem with entirely rational behavior is that it seems so, well… joyless.

Mr Spock was only fun because we could laugh at him. Besides, that hot head Jim Kirk usually had a better idea. [mostly tongue in cheek]

Expand full comment

So the point is, I guess, that I'm certainly not going to find fault with an attempt to do good with an approach that might possibly be construed as non-optimal.

Expand full comment
founding

So you guys practice a joyful rationality? I’m honestly a bit confused

Expand full comment

It seems pretty Spock like at times.

Expand full comment

I mean if you get down to assigning a numeric value to each action, isn’t that kind of like Spock?

Expand full comment

Okay. I’ll read the LW links

Expand full comment

I think I can compound my wealth at 15% a year, while the cost of utilons probably only goes up as fast as 3% expected inflation + 2% global real gdp per capita growth*. So for each year I delay, I can buy 10% more utilons. I'm 36 and have nearly US$5M, so if I delay till I'm 86 that's 1.1^50=117x more utilons. One might counterargue that a utilon provided today can compound itself and provide more utilons in the future, but that's less clear than financial compounding and it's going to heavily depend on the type of charity. Also knowledge about how to spend the money will be better in the future. The plan is to wait until I'm at least 80, then give away 10% of my net worth per year.

* (source: worldbank https://data.worldbank.org/indicator/NY.GDP.PCAP.CD

From 2010 to 2020, global GDP per capita in current US$ increased from 9558 to only 10909. That's an annualized growth of 1.3%/year. If we blame covid and cherrypick 2019 instead it's still only 1.97% from 2010-2019. Seems like there's a decent chance it will go negative in the future due to the lack of a demographic transition in some poor countries, but I'm not confident enough about it to factor that in to the model. There should be prediction markets for global GDP per capita in 2100 conditional on adopting policy X.)

(Have the EA people done empirical research on the utilon inflation rate and the utilon-compounding rate for various kinds of charity?)
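For what it's worth, the arithmetic in this comment checks out. A quick sketch (the 15% return and 5% utilon-cost growth are the commenter's own assumptions, not data):

```python
# Net ~10%/yr advantage to delaying giving, per the commenter's assumptions
# (15% compounding vs. 3% inflation + 2% real GDP per capita growth).
multiplier_50yr = 1.1 ** 50
print(round(multiplier_50yr))      # 117 -> matches "1.1^50=117x more utilons"

# World Bank figures cited: global GDP per capita, current US$, 2010 vs. 2020.
annualized = (10909 / 9558) ** (1 / 10) - 1
print(round(annualized * 100, 1))  # 1.3 -> matches "annualized growth of 1.3%/year"
```

Note the gap: the model assumes 5%/yr utilon-cost growth, but the cited data shows per-capita growth closer to 1.3-2%, which would make delaying look even better under the commenter's other assumptions.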

Expand full comment

Is this a general argument against high-risk high-potential-return giving (e.g. to existential risk charities) or do you think what Scott's doing here is clearly significantly worse than that?

Expand full comment

I'm skeptical that this is better. If it is better then why give the 10% to normal effective charities was my question. Scott responds above.

Expand full comment

For projects that have the potential to improve the lives of millions of people, the expected utility will often be much higher than a hundred lives saved, even if the improvement per person is really small.

Expand full comment

Then Scott's fund will be more efficient than the effective charities and he should put his money toward this fully rather than giving 10% to effective charities. Also, other altruists should do the same. Do you recommend this?

Expand full comment

I'd suggest incorporating a grant making organization to get around the taxes.

Expand full comment
author

Probably I should do this before next time, but I understand it's pretty hard and might cost more than I save.

Expand full comment

You just incorporate and then register with the IRS. It costs a few hundred dollars usually. You would be able to deduct what you give out from your taxes and to get out of gift tax in many cases. Presuming you don't hit the percentage of your income limit and the average grant is $25k you'd be saving about $90,000.

Expand full comment
founding

So I am not a tax expert and this is not tax advice and all that, but my understanding is that you won't have to pay gift taxes anyway, unless you end up making *way* more gifts than we're currently talking about - google "lifetime gift limit" and the first result's precis says "Most taxpayers won't ever pay gift tax because the IRS allows you to gift up to $11.7 million over your lifetime without having to pay gift tax."

Expand full comment

That's what I thought, too. There's also a $15,000 per year exemption before the lifetime exemption kicks in, which I think is per recipient as well. So you should be able to give your $250k grant pool as 17+ grants of $15k or less without cutting into your lifetime exemption.
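A trivial check of that split (a sketch; $15,000 is the 2021 per-recipient annual exclusion figure quoted in the surrounding comments):

```python
import math

pool = 250_000      # Scott's grant pool
exclusion = 15_000  # annual gift-tax exclusion per recipient (2021 figure)

# Minimum number of recipients so every gift stays at or under the exclusion.
grants = math.ceil(pool / exclusion)
print(grants)       # 17 -> matches "17+ grants of $15k or less"
```

The exclusion is per donee per year, so this only works if the pool is actually spread across at least 17 distinct recipients.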

Expand full comment

>There's also a $15,000 per year exemption before the lifetime exemption kicks in

That's right. Here's what the IRS says:

>How many annual exclusions are available?

>The annual exclusion applies to gifts to each donee. In other words, if you give each of your children $11,000 in 2002-2005, $12,000 in 2006-2008, $13,000 in 2009-2012 and $14,000 on or after January 1, 2013, the annual exclusion applies to each gift. The annual exclusion for 2014, 2015, 2016 and 2017 is $14,000. For 2018, 2019, 2020 and 2021, the annual exclusion is $15,000.

https://www.irs.gov/businesses/small-businesses-self-employed/frequently-asked-questions-on-gift-taxes

Expand full comment
founding

I *think* the relevant form was the 709, and that other than that you just need to sign some letter affirming that it is truly a gift, i.e. you don't expect anything in return? (this latter might have been to give to the recipient, so that they can prove on *their* taxes that this was a gift and not income). But again this is not my field so double check.

Expand full comment

Taxes on this sort of thing are not inherently bad.

Expand full comment

What's inherent badness got to do with this? It's just a fund allocation optimization thing.

Expand full comment

The government set up rules specifically to give things like this a tax exemption. I'm not sure why paying taxes the government doesn't want you to is a moral issue.

Expand full comment

A donor-advised fund is less onerous: https://www.nptrust.org/what-is-a-donor-advised-fund/

Expand full comment
founding

I don't think a DAF would work here - aren't they only allowed to make grants to officially approved charities? whereas Scott wants to make grants to random individuals.

Expand full comment

Right, these are not charitable donations, in the IRS sense.

Expand full comment

Well done, a very worthy endeavor.

"Some effective altruist organizations suggest that people with large but not billionaire-level amounts of money might want to try acting as charity “angel investors”. They argue that there are enough government agencies and billionaires to fund the biggest and most obvious legible high-impact opportunities."

I have often felt the urge to write up my belief that marginal utility declines faster for charitable giving than for other types of expenditures, but then I feel tired and lie down.

Expand full comment

Can I bribe you to write it up briefly? $50 to your paypal or whatever, or I'll sub to your substack for at least a year at the annual rate.

Expand full comment

But is it a worthy endeavor from a Marxist perspective?

Expand full comment

The Marxist perspective is that first things will get *really* bad, and then the Revolution comes and everything will be great forever, isn't it?

From such a perspective, the only worthy endeavors would be the ones that make the world a significantly worse place...

Expand full comment

Most Marxists are not "accelerationists", which is the position you are hinting towards.

Marxists are aware of the limits of bourgeois charity, so I'm surprised that Freddie's response was a simple "Well done, a very worthy endeavor". Again, as a Marxist, what exactly is "worthy" here?

Expand full comment

Would you say that this is still true for the particular top charities recommended by (say) GiveWell, who already try to account for decreasing marginal utility in their evaluations via their "room for more funding" consideration?

I think it's a fair question because "normal EAs" (like me, honestly) mostly just default to directing monthly donations to whatever orgs like GW recommend (maximally lazily, I just set up a recurring auto-payment to their Maximum Impact Fund), instead of trying to pick charities ourselves (which I agree would, on average, be affected by decreasing marginal utility).

Expand full comment

Can multiple people apply as a team for the grant? I think I have a few grant-worthy ideas but no time to work on them. My main project (in vitro gametogenesis) is already well-funded.

Expand full comment
author

You're doing in vitro gametogenesis? I...know a lot of people who would be willing to throw basically unlimited money at speeding that up a small amount for...uh...reasons...so let me know if there's anything do-able in this space.

But yes, of course you can apply as a team.

Expand full comment

Interesting, I'll email you to connect about the IVG stuff.

Expand full comment

Hey hey hey hey hey

ZNF674-AS1 - aldolases are deeply conserved/expressed first/oncogenic for a reason/please don't be like the rest of stem cell land where everyone thinks any carbon substrate will do (I know you're not stem cell land, but just in case there's a similar cavalier attitude towards biological reality)

(Literally every month a woman's uterus makes the enzyme to turn glucose into fructose to support any potential zygote until implantation and then once established - fetus starts secreting the same enzyme to turn any glucose in the womb into fructose for fetal metabolism. Basically, correct carbon substrates are way cheaper than fancy recombinant growth factors AND actually replicative of in vivo biology/you stand a chance of getting the tissue specific markers/enzymes you need)

Sorry- subsequent mitochondrial metabolites are involved in developmental signaling. Idk why the field sticks with protocols from before gay people had civil rights, but that's why I'm going to grad school.

Expand full comment

Re this thread from before:

https://astralcodexten.substack.com/p/model-city-monday-11821/comment/3555263

It’s not much, but if you submit a grant proposal that Scott accepts, and it has to do with anything related to what I brought up in the linked comment chain, or any research that could either validate or falsify practical predictions of various Georgist policies, I’ll contribute $1,000 of my book review winnings to your grant.

As well as my attention, personal network of wonks, and anything else I can do for you.

Expand full comment

I haven't done a thorough cost estimation, but I think that using this treatment (https://www.nature.com/articles/s41467-019-10366-y) against certain transposons (https://www.lesswrong.com/posts/ui6mDLdqXkaXiDMJ5/core-pathways-of-aging) has a good shot at _reversing_ aging.

Expand full comment

Where, if at all, would this treatment be within this list of most promising anti-aging strategies? https://www.lesswrong.com/posts/RcifQCKkRc9XTjxC2/anti-aging-state-of-the-art#Part_V__Most_promising_anti_aging_strategies_

Expand full comment

Assuming it works, it would take first place, as it would address the underlying cause of aging, as opposed to its symptoms.

Expand full comment

The LASER ART acronym really makes me want to punch the person responsible.

Expand full comment

Awesome news. So much basic work on third-rail issues with great potential to advance our understanding of the human condition is impossible in today's ideological climate.

Expand full comment

You love to see this kind of stuff. Excited to see where this goes!

Expand full comment

This is fantastic.

I've seen this sort of thing done at a lower level - people trying to do things who need money for it, and people gathering to review and fund things. I love this sort of model.

Good luck!

Expand full comment

Surprised you have money to burn, given that you had to quit your day job due to the mob.

Expand full comment

There was never any mob and Siskind was just complaining that the New York Times had the temerity to print his name in an article.

Expand full comment

Hey, marxbro, what's *your* real name? I mean, if we're talking about having the temerity to print one's name where anyone can read it, then you must mean there are no bad consequences, so why aren't you all letting us know who you really are?

Expand full comment

My real name is John Smith. People attempt to claim that NYT published Scott's name without his permission or something like that, but Scott had already put his real name out there himself. He's just angry that the New York Times is using its freedom of speech to post something minimally critical of him.

Expand full comment

Marxbro / John Smith is wrong -- Scott explains in https://slatestarcodex.com/2020/06/22/nyt-is-threatening-my-safety-by-revealing-my-real-name-so-i-am-deleting-the-blog/ (the quote below has links in the original post):

"I have a lot of reasons for staying pseudonymous. First, I’m a psychiatrist, and psychiatrists are kind of obsessive about preventing their patients from knowing anything about who they are outside of work. You can read more about this in this Scientific American article – and remember that the last psychiatrist blogger to get doxxed abandoned his blog too. I am not one of the big sticklers on this, but I’m more of a stickler than “let the New York Times tell my patients where they can find my personal blog”. I think it’s plausible that if I became a national news figure under my real name, my patients – who run the gamut from far-left to far-right – wouldn’t be able to engage with me in a normal therapeutic way. I also worry that my clinic would decide I am more of a liability than an asset and let me go, which would leave hundreds of patients in a dangerous situation as we tried to transition their care.

The second reason is more prosaic: some people want to kill me or ruin my life, and I would prefer not to make it too easy. I’ve received various death threats. I had someone on an anti-psychiatry subreddit put out a bounty for any information that could take me down (the mods deleted the post quickly, which I am grateful for). I’ve had dissatisfied blog readers call my work pretending to be dissatisfied patients in order to get me fired. And I recently learned that someone on SSC got SWATted in a way that they link to using their real name on the blog. I live with ten housemates including a three-year-old and an infant, and I would prefer this not happen to me or to them. Although I realize I accept some risk of this just by writing a blog with imperfect anonymity, getting doxxed on national news would take it to another level.

When I expressed these fears to the reporter, he said that it was New York Times policy to include real names, and he couldn’t change that."

Expand full comment

He says right there he's not a "big stickler" about remaining anonymous. That's obvious given the fact that he has never really tried to remain anonymous and indeed attached his real name to his blog a number of times. Why would the New York Times refrain from printing true information?

As for the supposed "death threats" and such, Scott never substantiated those or proved that they were any more credible than the run-of-the-mill internet death-threats that almost everyone receives.

Expand full comment

Then why don't you give us your real name? Because I think you are lying about being "John Smith", a pseudonym so threadbare it's been the butt of jokes since the 19th century.

You're very careful not to make yourself available in the same way, and to cry about being insulted etc. on this website, while not extending the same courtesy to others. If it's okay for the NYT to say Scott is a fascist and racist, because that's just "run-of-the-mill internet death threats that almost everyone receives", then it's also okay for me to convict you of blood on your hands as the apologist for Marxism, a philosophy which is smeared with the suffering and death of tens of millions of people from Russia to Asia. You are every bit as culpable as Pol Pot and the Red Guard for any deaths, and claiming you weren't alive then or weren't physically present means nothing: you uphold and defend and propagate the same philosophy as drove them, and to the same ends, and you would do the same to your enemies if you could.

This is why no-one on here treats you seriously: you're dressing up in your Jack the Ripper suit and standing there streaming blood while claiming to be the one person who knows the true method to peace and prosperity.

Expand full comment

hhhhmm is there a study about probability distribution of death-threats?

Expand full comment

"I think it’s plausible that if I became a national news figure under my real name, my patients – who run the gamut from far-left to far-right – wouldn’t be able to engage with me in a normal therapeutic way."

If that were to happen I don't see the problem at all. That's consumers making informed decisions about what they choose to buy in the marketplace. Surely that's something Scott, as a liberal, would usually champion.

Expand full comment

If this is known in advance, why not. Once there are hundreds of patients, things are more complicated.

Expand full comment

Several people use my real name. However, if I ask a stranger not to use my name as I do not give them permission to do so, and they go ahead and do it anyway, then they are in the wrong.

The NYT was not correct to do what they did, and "But all your family and friends and co-workers already know your name" is not good enough. They were also (or Cade Metz was also) fast enough to start complaining of the bad response they got, and they certainly didn't think "But your editor's name is freely available on your website" was a good enough reason.

You have a bee in your bonnet about Scott. That doesn't excuse you being discourteous. Nobody on here is convinced by Marxism; we all have our several reasons for that, and you are not persuading anybody by "read Marx" "I have read Marx" "you read him wrong".

Expand full comment

I don't really know what you mean by this whole concept of "if I ask a stranger not to use my name as I do not give them permission to do so, and they go ahead and do it anyway, then they are in the wrong". It seems basically unworkable when we're talking about media and journalism. If any person who didn't want to have their name in the press could simply send a request to journalists to have that info expunged, then newspapers would be very confusing reading indeed.

"That doesn't excuse you being discourteous"

I'm not discourteous and I'm not sure I've ever sworn on this comments section.

If you are not persuaded by my well-cited posts on Marx you can ask follow-up questions so I can help clear things up for you.

Expand full comment

"They were also (or Cade Metz was also) fast enough to start complaining of the bad response they got, and they certainly didn't think "But your editor's name is freely available on your website" was a good enough reason."

It seems you're admitting that "the mob", if there was any mob, was actually directed towards the NYT. Were any of Scott's co-workers sent a comparable amount of emails over this issue?

Expand full comment

LOL! John Smith, the world's most anonymous real name!

Expand full comment

Punchline: Scott is at present one of the most individually successful writing journalists in the world, surely top 100. (And more power to him!)

Substack is doing a *very* effective job monetizing the niche of high-profile individual writers, and per the current categorization Scott's the #1 writer in Science. See his comments here, and there are plenty of other analyses around the web that go a long way towards explaining why Substack can afford to headhunt so much high-profile talent: https://astralcodexten.substack.com/p/adding-my-data-point-to-the-discussion

Expand full comment

"Guy whose main source of income is blogging, which lets him pursue his passion of providing low-cost psychiatric care for uninsured patients" is a pretty heckin remarkable Type of Guy and I'm glad he exists.

Expand full comment

His substack says tens of thousands of subscribers. Assuming most are $100/year -- that's more than a psychiatrist's salary on subscriptions alone.

Expand full comment

If this doesn't have a name already, I propose "Peterson effect" (named after Jordan B Peterson). The idea is that if you are producing a lot of valuable content, it can subvert the efforts of the type of mob that tries to ruin your life by making you lose your job by spreading outrage.

On one hand, yes, the successfully spread outrage may cost you your current job. On the other hand, as a part of spreading the outrage, many people who never heard about you before will now learn your name, some of them will find the content you provide, and some of them will like it so much that this will create an alternative income stream for you, perhaps better than the one you lost.

The important part is that the value of the content is beyond what produced the controversy, so the alternative income stays long after the original controversy is forgotten.

Expand full comment

Maybe you could provide a few examples of the kinds of projects you'd be especially excited about?

Expand full comment

Once you sort out the most tax-efficient way to do this, it would be fantastic if you could share — I’ve wondered how to do this, too, and (speaking as a lawyer who sometimes dips shallowly into tax law) I’m not sure any of the explanations in the comments so far are totally accurate

Expand full comment

Couple of thousand bucks to buy a chunk of hafnium, with ~27% Hf178, and a dental X-ray machine to generate the nuclear isomer and trigger gamma-ray release.

Expand full comment

Stimulated emission of Hf178m2 never replicated, and was just as implausible as cold fusion.

https://en.wikipedia.org/wiki/Hafnium_controversy

Expand full comment

Muon-catalyzed fusion works fine, it's just not net-energy positive with current muon sources.

Expand full comment

I mean the fake kind of cold fusion that didn't use muons and never replicated.

Expand full comment

I think you may need to add several zeros to the price for that kind of isotopic enrichment.

Expand full comment

I'm a huge fan, Scott, but I sure hate it when people say they want to make the world a better place. We can't even agree on what would be better.

The best joke ever from the Mike Judge show "Silicon Valley" is the tech CEO speaking to his employees: "I don't want to be a part of a world in which someone else is making the world a better place than we are."

We are all children of Adam Smith, who argued very effectively that our best interest is to have the butcher, the baker and the candlestick maker work in their own best interests.

So, sorry, but when someone says they want to make the world a better place, I think you are Stalin.

Expand full comment
author

If you start a grants program for people who want to make the world a worse place, I'll link you.

Expand full comment

Cool! Thanks.

Expand full comment

I mean, didn't you write how the tails coming apart is a metaphor for life? I think there's nothing implausible about being in favor of everyone making the world moderately better, but not extremely better. Since if someone can make the world extremely better from their perspective, that probably involves making it worse for a lot of people. I think that's a not unreasonable intuition, but as a counterpoint, $250k probably can't make the world all that much better anyways.

Expand full comment

I'm interested in what the people who want to make the world worse would do. Seems like the best thing we have are good institutions, so undermining them would be the way to make the world worse. But how would we do that? I mean faster than we are already?

Expand full comment

I think Peter Thiel already has that covered

Expand full comment

Before someone writes a thousand word rebuttal: that is mostly a joke, please don't link me to all the actual good companies the founders fund is working with

Expand full comment

Nah, it was a good joke. I laughed.

Expand full comment

I would support the Mephisto Grant For Willing Bad But Doing Good.

Theory: due to unintended consequences, it's pretty much a crapshoot whether a particular intervention makes the world a better or worse place. The best you can do is to try things at random and see what happens. But while the space of good-sounding interventions is crowded, the space of evil-sounding interventions is relatively unexplored. The Mephisto Grant will support evil-sounding (but not illegal) projects designed to make the world a worse place, on the off chance that one of them might actually do some good.

Expand full comment
founding

This is very cool. But it'd be even cooler if it were easier to help you out with funding. I would love to donate to your capital pool here, but I think if I were to do that, either you'd have to pay income tax on the money, or I'd have to pay gift tax on it. This basically means one way or the other, half the money immediately goes to the government.

It seems extremely worthwhile to form a non-profit of some kind to prevent this. I think you could easily raise many millions of dollars for a project like this. I'm happy to pay the legal expenses to get it done.

Expand full comment

I think good projects he does not fund will go up on the blog, and you can fund them directly from there, as one possibility. If you like the model, you can also consider donating to Emergent Ventures, although there you rely on Tyler Cowen's judgement.

Expand full comment

Gift tax has a lifetime exclusion of ~$12M. Neither you nor Scott will be paying any gift taxes until you've given over ~$12M.

Expand full comment

Couple of questions:

1. Is there any limitation on what the money can be spent on? For example, some fellowships only allow their funds to be spent on direct costs of the research (e.g. paying participants) but not as a salary for the researchers.

2. Do you have already an idea on how to check the progress of the projects, does one have to write some kind of report every other month? Do you have any other formal prerequisites like open data or preregistration?

3. Can one submit multiple proposals and if so, should you fill out the form just once or for every submission separately?

4. Are there any restrictions concerning the timing? For example if I still need half a year until I am finished with my phd, is it okay to wait until afterwards?

5. How high is the bar for "direct applicability"? If one for example does basic psychological research on happiness and wellbeing, but the research is not directly aimed at applying this to the real world and actually make people feel better, would that still have a chance on being picked?

Expand full comment

Wonderful initiative!

One question: it's unclear whether non-US applicants are permitted?

Expand full comment

He specifically mentions being from a developing country, so seems like it's okay (might be more complex from a tax perspective though)

Expand full comment

Ah, I'm a sloppy reader... Thanks!

Expand full comment

Tossing this one out here in the hopes that someone else can pick it up....

Most cancers start as mutations in known oncogenes. They are genetically distinct from surrounding tissue. CRISPRs are good at acting when and only when they see a particular sequence. One could use the famous CRISPR-Cas9 to transfect exclusively tumor cells with some sort of cytotoxin (an aggressive protease perhaps -- then once the cell is dead, the protease will destroy itself before the membrane lyses). Or CRISPR-Cas12 cuts out the middleman and becomes a ravenous nuclease when it sees the trigger sequence. Delivering these agents could probably be done with standard techniques. ISTR one of those is the envelope of a DNA virus which cannot pass myelin -- that would be a useful extra safety feature for non-brain cancers.

How would one go about developing this?

First, take some convenient mammalian cells, transfect them with different colors of fluorescence, mix them in vitro, then kill one color with the CRISPR. This should require a minimum of sequencing, as the fluorescent proteins are known, and you just need to find a subsequence not present in the host genome. You can measure effectiveness and false-positive rate optically, which should also be cheap. This lets you iterate freely on delivery mechanism.

Once that looks good, move to the in vivo version. Use some not-spreading-very-far vector to put fluorescent polka dots on mice, then remove them with a system-wide CRISPR.

Keep an eye out for behavioral signs of pain. The dying cells might leak enough ATP into the interstitial fluid to trigger an endovanilloid cascade. If so, better to learn sooner than later. Lidocaine may be all that's needed here, but maybe cortisone if there are signs of dangerous local inflammation.

If this part works, move on to cancers. Probably best to use skin cancer, since biopsies will be easier. This is where the sequencing may get expensive, since you need both healthy tissue and tumor tissue from each mouse, and ideally many cells from the tumor via single-cell sequencing so that further mutations don't lead you astray.

Have three groups of test animals: no cancer, just cancer, and cancer plus cure. Then make all the comparisons in total lifespan and cancer biomarkers (pick a sufficiently short-lived species that total lifespan is practical to observe). The groups shouldn't need to be very big: just big enough that the proposition "cancer shortens lifespan" will show up unambiguously.
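The "just big enough" intuition can be sanity-checked with a standard two-sample power calculation. A minimal sketch (the effect sizes are hypothetical placeholders, and `group_size` is an illustrative helper, not anything from the original proposal):

```python
# Rough power calculation for the "cancer shortens lifespan" comparison.
# Effect sizes (Cohen's d) below are hypothetical, chosen for illustration.
from math import ceil
from statistics import NormalDist

def group_size(effect_d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Mice per group for a two-sample z-test, given the expected
    lifespan difference in standard-deviation units (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided test
    z_beta = z(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_d) ** 2)

# A very large effect (d = 1.5), e.g. untreated tumors cutting months off
# a mouse lifespan, needs only a handful of animals per group:
print(group_size(1.5))  # 7
print(group_size(1.0))  # 16
```

This supports the comment's claim: if the untreated-cancer group really dies dramatically earlier, single-digit group sizes suffice, though survival data in practice would more likely be analyzed with a log-rank test.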

I think this could be done by a grad student with good wet-lab skills on the sort of budget Scott is offering. I have no wet lab skills, so I'm hoping someone who does reads this.

Beyond the cure mice phase, I see two options. One is to proceed to human trials as is classic. This will require more money, but hopefully at that point other attention will arrive. Such a study would probably compare "normal treatment" to "normal treatment plus this", and would keep the clinical oncologists blinded, but warn them all to check signs and adjust dosages frequently.

The other is to establish a commercial veterinary cancer-cure center and let people wonder why we can cure dogs but not people, and maybe whether a human could be slipped in by officially being an orangutan.

Expand full comment

Doing this without official sanction looks an awful lot like animal abuse.

The only vertebrate you can really biohack is yourself.

Expand full comment

If you have an academic affiliation, or half of one, you can probably get mouse approval pretty easily. If not, the in vitro steps are still a good start.

Alternatively, move to a rural state and insinuate a plan to eat the mice when you're done. Then sue anyone who accuses you of animal cruelty.

Expand full comment

I was thinking "gee, he said 'animal abuse', didn't say anything about legality."

Then I noticed he defined it as not-looking-like-abuse if there's "official sanction".

Expand full comment

This idea occurred long ago to the regular research establishment:

https://www.science.org/doi/10.1126/sciadv.abc9450

Expand full comment

I am super excited about the general project and would love to apply, but am very uncreative when it comes to generating new and fruitful research ideas.

My background: I have a Master's in psychology with a focus on clinical neuroscience. I received 5 years of training in CBT and am about to finish my PhD, which is basic research on predictive coding and its conception of what emotions are. I know my stuff around statistics and good methodology. If funded by the grant, I could buy testing time on all the machinery usually needed for neuropsychological research (fMRI, EEG, TMS, EDA, etc.). Through the institution I currently work for, I could get access to patients with all kinds of mental illnesses to recruit as participants for experiments. I don't have any medical degree, which is why any research with medications would unfortunately be off the table.

Expand full comment

This is an excellent idea. I run a much smaller microgrants programme (only 1K) based on the same concept, likewise borrowed from Tyler Cowen. www.ThenDoBetter.com/grants

Expand full comment
author

Any advice for me?

Expand full comment

My thoughts on this for you are… you need to decide whether you favour low-probability but high-reward ideas, or, say, medium-probability and medium-reward ideas (or whether you are indifferent). You also need to consider how much weight you give to an applicant's access to other capital. My grants tilt personal, as I am interested in giving capital to people who struggle to get capital from orthodox sources. I also weight fairly highly the chance that the person can complete the project in some way, even if the outcome of the project has a very low chance of success. A completed negative result is good.

The other surprising aspect I found: for a number of applications that I thought were good but not suitable for me, a few words of encouragement and feedback set the applicants off on a very positive path. This might have been only 5 to 10% of applications, but that soft feedback was valuable to them.

If an application is a NO for you, don’t waste time going back and forth or questioning your decision too much. Just move on. This is partly a question of your judgement and luck (presuming you are funding low probability ideas) and speed is probably more valuable.

I would not disregard that this is an investment in the person or team as much as, or on occasion even more than, the idea. At least it is for me. It is partly a talent bet, and partly a bet that this talent, with access to the right capital, will pay off, potentially big. EA cannot reach and does not reach such talent, IMO.

As you rightly assess, one of your largest assets is your network and following, so-called social and relationship capital, and I would utilise that.

Expand full comment

Very cool! I think it would be better if 'How much money do you need?' were a long-form answer field. As it is right now, you can't see the formatting (or most of the text).

Expand full comment

Will there be another one of these in the future?

Expand full comment

Yeah, this is really exciting, especially given the current problem EA has of trying to find highly scalable projects. I'd be excited if people applied with projects that could use this money as a test of something that could potentially absorb millions to tens of millions of dollars.

Expand full comment

Okay, speaking of Mad Scientist Malarkey, I've just read this new post over on The Renaissance Mathematicus and "Wow" is my first reaction.

Funding research on brain transplants during the Cold War - https://thonyc.wordpress.com/2021/11/10/would-you-like-a-new-body-for-that-brain-sir/

"Brain transplants are the subject of science fiction and Gothic horror, right? One of the most famous Gothic horror stories, Mary Shelley’s Frankenstein; or, The Modern Prometheus features a brain transplant, of which much is made in the various film versions. But in real life, a fantasy not a reality, or? Wrong, the American neurosurgeon Robert White (1926–2010) devoted most of his working life to the dream of transplanting a human brain, experimenting, and working towards fulfilment of this dream. I’m a voracious reader consuming, particularly in my youth, vast amounts of scientific and related literature, but I had never come across the work of Robert White, which took place during my lifetime. Thanks to Brandy Schillace, this lacuna in my knowledge has been more than filled, through her fascinating and disturbing book 'Mr. Humble and Dr. Butcher: A Monkey’s Head, the Pope’s Neuroscientist, and the Quest to transplant the Soul', which tells in great detail the story of Robert White’s dream and his attempts to fulfil it.

The title is of course a play on the title of Robert Louis Stevenson’s notorious Gothic novella Strange Case of Dr Jekyll and Mr Hyde, the story of a medically induced split personality, with a good persona and an evil one. Here, Mr Humble refers to the neurosurgeon Bob White, deeply religious, Catholic family father and brain surgeon, who always engaged 150% for his patients. A saint of a man, who everybody looked up to and admired.

Dr. Butcher refers to the research scientist Dr White, who carried out a, at times truly brutal, programme of animal experimentation on the way to his ultimate goal, the transplantation of a human brain."

Expand full comment

Given that Scott seems to be doing well enough financially, I'd feel better about renewing my subscription if some of it was going to grants like this.

Expand full comment

Somewhere there is an ACX post which lists the impact of various educational interventions (eg. class sizes, tutoring) but damned if I can find it. If anyone can point me in the right direction I would be grateful!

Expand full comment

If you set up a way to give small donations into the pool, I'd be interested in donating, and I think others would too. I know there are other similar charities, but I'm kind of partial to Scott's judgement and network.

Expand full comment

Two charities you should consider funding are the Center on Long-Term Risk and the Center for Reducing Suffering. These organizations are focused on reducing S-risks, or risks of astronomical suffering.

https://longtermrisk.org/

https://centerforreducingsuffering.org/

https://reducing-suffering.org/donation-recommendations/

https://www.youtube.com/watch?v=jiZxEJcFExc

Expand full comment

awesome :) this is very generous scott, thank you for doing this

Expand full comment

I saw a comment on LessWrong recently asking "what if we just gave Terence Tao (for example) ten million dollars to work on the AI alignment problem for a year". It is a funny idea, but also seems worth seriously considering. I realize $10M >>> $250K, and I personally am nowhere near qualified to try to arrange some sort of actual project like this, but I figured this is worth bringing up here when weird high-upside-risk ideas that cost money are being discussed.

Expand full comment

If you have to pay the smartest people in the world to work on AI Alignment, maybe AI alignment is just not that important.

Expand full comment

It's also possible that it might be important, but not actually present any surface for attack at the moment.

As I understand it, people have spent twenty years thinking about "AI Alignment" and not actually managed to chip away at it in any meaningful way.

Expand full comment

I don't entirely disagree with the general point that maybe the smartest people in the world choose what to work on in a smart manner. But you can't just conclude that whatever they aren't doing is not that important, I don't think. That kind of reasoning seems to imply that solving Navier-Stokes is one of the Most Important Things currently, which seems pretty unreasonable to me.

Expand full comment