251 Comments
Comment deleted (Nov 14, 2021)

Paula, I don't think the field of posting spammy links to nudes is a neglected cause area.

Comment deleted (Nov 13, 2021)

Comment deleted (Nov 13, 2021)

This seems insulting for no reason. I fear that the SSC/ACX comments are going downhill, likely because of decreased political diversity.


While I agree that marxbro is generally making the comment section worse, in the context of this thread the insulting joke was neither necessary nor kind (and so we should strive to do better).


I don't make the comment sections worse; I'm one of the few commenters here who fact-checks Scott when he writes about left-wing topics.


Mea culpa: it is true that my joke was neither necessary nor kind. I do think there was a bit of truth to it, but 1 out of 3 is not enough; I’ll delete it.


Props for recognizing it as such (and I agree on the truthiness of the statement).


How was the statement true?


I believe that you repetitively bring up the same small number of points, in a generally aggressive way.

To look at the comments on this post:

1) 'But is it a worthy endeavor from a Marxist perspective?'

If you have an opinion on the worthiness of this endeavor from a Marxist perspective, it seems like it would be much more interesting if you explained why the endeavor was good/bad. Just mentioning 'Marx?' doesn't seem fruitful.

2) 'There was never any mob and Siskind was just complaining that the New York Times had the temerity to print his name in an article.'

Again, this is an argument you seem to bring up whenever it's vaguely relevant (or when it's not). I think Scott's additional arguments against the piece as deliberately mischaracterizing him seem at the very least plausible, and implying that there's no other reason Scott could object seems like you're just trying to start shit. To look at your comment through the gate test:

Is it true? Probably not.

Is it kind? No.

Is it necessary? Probably not.

To go back further, 'Secrets of the Great Families' has two civil and relevant comments by you. Props, no objection from me.

In 'Epistemic Minor Leagues', you have a semantic argument that while Marx believes Communism is the inevitable outcome of Capitalism, this doesn't amount to historical determinism.

After reading this more, I'll quote John Wittle's excellent comment in full, and then never reply to you again:

"I would say that people on SSC have spent more time arguing over whether Marxbro is a troll than they have on any other individual topic. The consensus is that he is not. he has accounts on several other sites where he does the same thing he does here: defends marx unconditionally.

Generally he responds to things that have very little to do with marx, and wrenches the conversation in the direction of marx, and then continues pushing forever. His favorite topic in these comments is lambasting Scott for being critical of Marxism without having considered xyz arguments. He comments this in some form on nearly every single post.

And it always leads to a very long argument that most people are extremely tired of having. Both the object-level "arguing with marxbro" and the meta-level "arguing about what Scott should do about marxbro to keep him from ruining the comments section any longer"

"


This is still an implicit insult and more ‘piling on’ behavior.

Think about it for a moment. I’m pretty sure you’ll understand what I’m saying.

I wish I could say it was childish but I see it all too often in adults.

Respectful consideration of good faith arguments even if you happen to disagree is a major part of the appeal of ACX.

This ganging up on the ‘other’ no matter how obliquely done is screwed up.


You’re assuming “good faith arguments” which deserve “respectful consideration.” If I were interested in continuing this discussion, I would give reasons why, and examples of how, those assumptions are not accurate in this case. But I’m not, so I won’t.


At some point, it seems reasonable to no longer believe arguments are being made in good faith. For example, if someone replied to literally every thread with a statement semantically identical to "But what about climate change?" and then made tangential follow-ups when questioned, I'd be forced to conclude they were arguing in bad faith (or otherwise not worth the time). The same would hold true for someone who did this in 10% of threads, 1% of threads, and most likely 0.01% of threads. I believe that marxbro is in a similar position.

I also think we're quibbling about an awfully small difference in implication. Your comment that the insult was neither necessary nor kind has heavy Gricean implicature that the insult was true, so I would prefer a slightly less sanctimonious admonishment about avoiding oblique insults.


Paula's grant proposal fits the conditions specified by Scott surprisingly well.

Online porn addresses global poverty (for poor people, finding online porn is cheaper than paying a sex worker) and global health challenges (you can't get AIDS from watching porn). Porn is often a driver for modern technology (source: https://www.youtube.com/watch?v=zBDCq6Q8k2E ). It helps people stay at home during pandemics...


this made me laugh :)

Comment deleted (Nov 12, 2021)

This + lobbying for more MAOI use would actually be pretty impactful.

Comment deleted (Nov 12, 2021)

Thanks for the kind words. I don't have any plans to create a network; maybe I should figure that out, but I can't guarantee it. Does anyone know how this is usually done?

I suppose if you wanted to game the system you could apply for a very small grant to improve some aspect of your operations, and check the box to be included in ACX++, and then I would have to advertise you!

Comment deleted (Nov 12, 2021)

I want to signal boost this - particularly for startups or individuals/small groups operating outside of accelerators/universities, funders play a much bigger role than just money - they can provide connections, sounding boards, boring-but-essential admin tips, etc.


The network already exists; I'm typing stuff into part of it right now. If the ACX-funded subnetwork wants to start their own [whatever is recommended as an alternative to a Discord server] then they can. And I'd not expect this endeavour to gain an institutional nature overnight.


At minimum, setting up a Discord or Slack for all the funded projects and seeing what people use it for seems like a straightforward and good step. YCombinator may be the premier example here, although they have a fairly complex internally developed mashup of LinkedIn and Facebook for the job (known as BookFace)

Comment deleted (Nov 12, 2021)

The FAQ above says applications close in 2 weeks.


Might be good to put a date on that, just for clarity. Especially for people who don't see this post on its first day.

Comment deleted (Nov 12, 2021)

I think spending a tiny, tiny fraction of EA money on moonshots that have a nonzero probability of dramatically increasing QALYs, or actually saving lives, makes this a really good, in fact underrated, idea.

That's a generic/a priori argument, and I understand the demand for providing those probabilities rigorously when the opportunity cost is lives saved. But that's the thing: venture capitalists, philanthropies, smart people, and capitalism generally have all been attempting to predict the impact of moonshot ventures for a long time, and it's *really, really hard* to predict what will work and what won't, so at some point one has to make a judgment call and say: time for some moonshots.


Good question.

One answer could be that since rationally I should fill up the most expected-effective category before moving on to anything else, and we're talking about categories much too big for any individual to ever fill up, this proves that I'm not donating fully rationally and am instead trying to satisfy psychological needs (spending some money to feel risk-seeking and innovative, then other money to feel certain that I'm doing at least some good). I can't argue with this, but I think there are compelling arguments for either being the higher-utility one, and although they're probably not *exactly* equally compelling, it's close enough that the value of satisfying my psychological needs is higher (to me) than the expected value gain of doing whichever one turns out to be better.

Another answer is that since I'm not especially rich but I am especially a public figure, the value of almost everything I do is as a role model. I'm role-modeling pledging 10% of my money to effective charities (which I think more people should do), and I'm also exploiting my public-figure status to get proposals for exciting new ideas that richer people than I am can fund. These don't trade off against each other the same way that money does.

(maybe you could counterargue that I should role model being rational and not putting my psychological needs above relatively-small-but-absolutely-large gains in expected utility, but I'm not sure people would listen to that message. I wouldn't!)

To sell NFTs, I would actively have to ask for them or advertise them, and I think the reputational damage this would do is more costly than the money I would get. But if anyone wants to unsolicitedly buy an NFT from me, sure, whatever, send me an email as long as you're willing to pay enough for it to be worth my time (I am technically unsophisticated, you would have to walk me through the process, and it would take a while).


Yet another answer is that since you are already (separately from this, if I understand correctly) donating 10% to charity, this doesn't actually funge against saving lives but against your metaphorical beer fund, and therefore you are completely off the hook for that particular question as per one of your old SSC posts (this one, I think: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/).


"maybe you could counterargue that I should role model being rational and not putting my psychological needs above relatively-small-but-absolutely-large gains in expected utility, but I'm not sure people would listen to that message. I wouldn't!"

You should listen to that message tho cuz it is the correct one in this situation.


Your argument isn't very convincing. I think the problem comes from conflating "rational" -> maximizes utility, and "rational" -> uncompromised by human emotions. Both senses of rational are ideals rather than realities, but even in the mixed and muddied real world the two senses output very different results. It seems like Scott is trying to be rational in the sense of maximizing his utility by meeting his emotional needs, while you are trying to be rational by not letting emotional needs get in the way of clear numerical calculations. Anecdotally, I cannot recommend utilitarians try to discount/ignore their emotional needs. It works no better than trying to discount/ignore physical needs.

(Of course if saving as many lives as you can is your big emotional need, then go right ahead!)


Honestly, I think this idea is possibly the maximal EA for Scott, regardless of whether it also happens to serve his emotional needs.

Ord's thesis in The Precipice is, after all, that if you value humanity's future at some non-hugely-discounted rate then X-risk swamps everything. The chance of something coming out of here that substantially ameliorates an X-risk is low, but given that X-risk charities are mostly doing similar things to this anyway and ACX has more penetration than most of them it's probably the biggest splash in that pool Scott can make.


> rationally I should fill up the most expected-effective category before moving on to anything else

This doesn't sound right to me.

It's not true, any more than a rational investor seeking to maximise his returns should throw all his money into whatever he thinks will have the highest return. That's a silly way to invest; instead a rational investor should acknowledge his own ignorance about which investments will pay off, and have a portfolio of different investments in different areas with different levels of risk.

(Project idea: apply Modern Portfolio Theory to Effective Altruism projects.)
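
A toy sketch of what that could look like in Python (all the impact estimates and covariances below are invented placeholders, purely to illustrate the mechanics):

```python
# Mean-variance "portfolio" over hypothetical EA cause areas.
# mu = expected impact per dollar, cov = uncertainty about those estimates.
import numpy as np

mu = np.array([0.05, 0.08, 0.30])        # made-up expected "impact returns"
cov = np.array([[0.01, 0.00, 0.00],      # made-up variances/covariances
                [0.00, 0.04, 0.01],
                [0.00, 0.01, 0.25]])

risk_aversion = 3.0
# Mean-variance optimum: maximize mu'w - (lambda/2) w'Cov w  =>  w ~ Cov^-1 mu
w = np.linalg.solve(cov, mu) / risk_aversion
w = np.clip(w, 0, None)                  # can't "short" a charity
w /= w.sum()                             # normalize to a full budget
print(dict(zip(["bed nets", "policy", "moonshots"], w.round(2))))
```

Even though the risky "moonshots" bucket has the highest expected impact, the optimizer spreads the budget because of the uncertainty term.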


Also, consider comparative advantage. Any semi-rich jerk can give $250K to random charity, but Scott is uniquely well placed to run this kind of project, because he has a lot of smart readers and all that.

If someone else (e.g. me) tried to run the same sort of grant project it would go much worse, because I'd probably have to resort to advertising on lamp posts in my local area or something and I'd attract a much lower quality of applicants.


I think there is a big difference between investing and charitable giving that makes this analogy not as applicable.

When doing personal investing, you are trying to maximize your expected utility, but the way you operationalize this is by trying to make money. If your goal was solely to maximize your expected amount of money, you really would choose the one stock with the highest expected return and ride it until some other stock began to look more promising. But since your goal is actually to maximize utility, and since utility is a sub-linear (perhaps logarithmic) function of money, you act more safely and diversify.

When giving charitably (as an effective altruist at least), you are trying to maximize your expected positive impact on society, and the way you operationalize this is by finding the interventions with the highest expected positive impact. There's no disconnect here like there is between wealth and utility in investing. And the way to maximize your expected positive impact on society is to find the most promising cause and donate all your money to it (that is, unless you donate enough money for there to start being diminishing returns, which is not a problem for most smaller-scale donors).

I say all this as someone who does diversify his charitable giving somewhat, but I'm not sure if I'm doing the right thing.


I think you're confusing "rational" with "return maximizing". The point of diversification is hedging against losses. A return-maximizing investor should sink all their money in the offer with the highest return, possibly lose it all and say "Shrug, it was the correct strategy anyway."


If you apply the reasoning of "I should fill up the most expected-effective category before moving on to anything else" to an investment portfolio, you would end up with a diversified portfolio. Your first $1 goes into, say, a stock index fund. At some point the expected utility of an additional dollar in stocks isn't as high as the expected utility of a dollar in bonds, because the marginal risk reduction increases expected utility more than the marginal expected return does. So your next $1 goes into bonds.

It's debatable when and to what extent charitable projects experience diminishing marginal utility. But to the extent that they don't, you should put all of your dollars into whatever category has the highest expected utility.
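
Here's a little Python sketch of that dollar-by-dollar logic (the utility curves are invented): with diminishing returns the money spreads out, but a category with constant marginal utility eventually swallows everything.

```python
# Greedy allocation: each dollar goes to the category with the highest
# marginal expected utility *right now*. Curves below are hypothetical.
def marginal_utility(category, amount):
    if category == "stocks":
        return 1.0 / (1 + amount)   # diminishing returns (derivative of log)
    if category == "bonds":
        return 0.5 / (1 + amount)   # diminishing returns, lower start
    return 0.6                      # "moonshot": constant marginal utility

allocation = {"stocks": 0, "bonds": 0, "moonshot": 0}
for _ in range(100):                # allocate 100 dollars, one at a time
    best = max(allocation, key=lambda c: marginal_utility(c, allocation[c]))
    allocation[best] += 1

print(allocation)                   # {'stocks': 1, 'bonds': 0, 'moonshot': 99}
```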


The Kelly Criterion says that your bankroll growth rate is maximized when your bet sizing maximizes E(log(bankroll)). If you go all in on one investment, there is a chance your bankroll goes to zero, and the log of zero is negative infinity, so E(log(bankroll)) is not maximized.

In charitable giving, you just try to maximize E(utility) instead of E(log(utility)). There are enough other people doing charity work that THEY provide the diversification. So if an omnipotent being offers me a coin flip, wherein if I lost the flip I get nothing, and if I won the flip I'd get 10^69420 QALYs for each dollar I wagered, I am definitely going all in.
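
For concreteness, a minimal Python sketch of the Kelly math for a simple binary bet (textbook formula, toy numbers):

```python
import math

def log_growth(f, p, b):
    """Expected log growth per bet when wagering fraction f of bankroll,
    with win probability p and net odds b (win f*b, otherwise lose f)."""
    if f >= 1.0:
        return float("-inf")  # all-in: a loss zeroes the bankroll, log(0) = -inf
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def kelly_fraction(p, b):
    """Closed-form optimum of the above: f* = p - (1 - p) / b."""
    return p - (1 - p) / b

p, b = 0.6, 1.0                # toy example: 60% chance to double the stake
f = kelly_fraction(p, b)       # 0.2, i.e. bet 20% of the bankroll
print(f, log_growth(f, p, b))  # ~0.02 expected log growth per bet
```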


Forgive my mathematical illiteracy, but what difference does the log operator make here?


A logarithmic utility function makes you more risk-averse. If there's a coin flip that doubles/halves your bankroll, the linear utility maximizer would always do it because E(bankroll) is 1.25*pre_bankroll. But the logarithmic utility maximizer would be indifferent.
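
Spelling that coin flip out (just the two expectations, on a bankroll W):

```latex
E[W'] = \tfrac{1}{2}(2W) + \tfrac{1}{2}\,\tfrac{W}{2} = 1.25\,W
\quad \text{(linear utility: take the bet)}

E[\log W'] = \tfrac{1}{2}\log(2W) + \tfrac{1}{2}\log\tfrac{W}{2} = \log W
\quad \text{(log utility: indifferent)}
```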

Logarithmic utility is appropriate whenever your future income expectation is directly proportional to your future bankroll, which is a halfway decent approximation for both investing and professional gambling. But it ignores living expenses (which should make you more risk-averse) and it ignores diminishing returns where larger investments result in smaller rates of return (which should make you more risk-tolerant, and which are pervasive in both investing and professional gambling). Most professional gamblers bet half-Kelly or less, but I'm on the more risk-tolerant end of the spectrum.


An investor is trying to maximize his own risk-return tradeoff, thus his personal investments need to be diversified. But an Effective Altruist isn't trying to maximize the risk-return tradeoff for the projects he funds. Since the benefit accrues to the world, the proper portfolio is all of charitable giving, or something like that. Since the amount of money Scott is giving is small, relatively speaking, the "rational" thing to do is almost always to put it all in one place, to get the world portfolio closer to optimal.

This argument gets weaker if you're talking about Bill Gates-level donations, where you might actually fill up a bucket to its optimal position.


Diversification is called the only free lunch.


Progress probably trumps direct charity in the long run... although I'm sure EA has a good response to this.


(1) As Roko pointed out on Twitter (https://twitter.com/RokoMijic/status/1459121955819425796), as regards carbon emissions, it isn't very consistent to own Bitcoin/Ethereum or any other Proof of Work (PoW) cryptocurrency while anguishing about NFTs in particular. (Assuming you do; I don't know.) Especially since these NFTs would presumably reside on Ethereum, which is moving towards non-polluting Proof of Stake next year, so if anything, driving up activity on it increases the chances that it flips Bitcoin sooner (this would be good from a climate perspective since $BTC has no plans to move away from PoW).

(2) Or you could, even today, sell your NFTs on a non-PoW chain such as Solana, which is probably the chain that has the best chances of flipping $ETH in turn. NFT market on Solana: https://solanart.io/

(3) You could also buy $KLIMA (https://www.klimadao.finance/), which buys up carbon credits, in amounts commensurate with whatever you think the carbon impact of any particular NFT you mint and sell is.

(4) Last but not least, customary reminder that moderate climate change towards a warmer world is almost certainly a net good, but I suppose that's a bit OT here. In any case, the US military alone emits far more carbon than all cryptocurrency combined, and is also probably a net negative for global welfare, whereas crypto is a massive net positive. From this perspective, wouldn't it be more moral (if admittedly also more dangerous) to try to maximize tax evasion?


> Last but not least, customary reminder that moderate climate change towards a warmer world is almost certainly a net good

I tried to track down this claim, and I found: an article by Matt Ridley, which cites a paper by Richard Tol; multiple corrections to the Tol paper due to data entry errors, which Tol says do not change the conclusion; and two posts by Andrew Gelman claiming that the paper and correction have additional errors that throw this all into doubt.

Is that indeed the source for your claim? If so, the information sounds woefully inadequate (like the question is under-researched, or has been summarily ignored due to apparent bogosity, or both).

It hardly needs to be said that this is a fringe belief, and unsurprising that Matt Ridley and Richard Tol are both accused of being climate science deniers (etc.), but whether that is a cause or an effect is unclear to me.

(Matt Ridley is a journalist, libertarian, and viscount. Richard Tol is an economics professor. Andrew Gelman is a statistics and polisci professor.)


I commented in greater depth on the old AGW specific thread. But TLDR: Many of the risks are overstated. Coastal megapolises are sinking 10x+ faster due to groundwater depletion than sea level rise. Warmer world = wetter world (cold produces drought, which is the real killer of civilizations historically), with a greater fertilization effect.

No, it's not based on any of those people, but on paleoclimate evidence (as in, things that actually happened, as opposed to speculative models trying to incorporate many kinds of phenomena that are very poorly understood even in isolation, let alone as part of a complex system), e.g. the Sahara being a verdant garden hosting elephants and hippos when the world was 2C warmer. Deep Future by Curt Stager is a good introduction.


My biggest problem with entirely rational behavior is that it seems so, well… joyless.

Mr Spock was only fun because we could laugh at him. Besides, that hothead Jim Kirk usually had a better idea. [mostly tongue in cheek]


So the point is, I guess, that I'm certainly not going to find fault with an attempt to do good using an approach that might possibly be construed as non-optimal.


So you guys practice a joyful rationality? I’m honestly a bit confused


It seems pretty Spock like at times.


I mean if you get down to assigning a numeric value to each action, isn’t that kind of like Spock?


Okay. I’ll read the LW links


I think I can compound my wealth at 15% a year, while the cost of utilons probably only goes up as fast as 3% expected inflation + 2% global real GDP per capita growth*. So for each year I delay, I can buy 10% more utilons. I'm 36 and have nearly US$5M, so if I delay till I'm 86 that's 1.1^50=117x more utilons. One might counterargue that a utilon provided today can compound itself and provide more utilons in the future, but that's less clear than financial compounding and it's going to depend heavily on the type of charity. Also, knowledge about how to spend the money will be better in the future. The plan is to wait until I'm at least 80, then give away 10% of my net worth per year.

* (source: World Bank, https://data.worldbank.org/indicator/NY.GDP.PCAP.CD

From 2010 to 2020, global GDP per capita in current US$ increased from $9,558 to only $10,909. That's annualized growth of 1.3%/year. If we blame covid and cherry-pick 2019 instead, it's still only 1.97% from 2010-2019. Seems like there's a decent chance it will go negative in the future due to the lack of a demographic transition in some poor countries, but I'm not confident enough about that to factor it into the model. There should be prediction markets for global GDP per capita in 2100 conditional on adopting policy X.)

(Have the EA people done empirical research on the utilon inflation rate and the utilon-compounding rate for various kinds of charity?)
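
(For anyone checking the arithmetic, a quick Python sketch using the numbers above; note the "10% more per year" is a shortcut for 1.15/1.05:)

```python
wealth_growth, utilon_inflation = 1.15, 1.05  # 15% returns vs ~5% utilon cost growth
years = 86 - 36

shortcut = 1.10 ** years                      # the "10% more utilons per year" shortcut
exact = (wealth_growth / utilon_inflation) ** years
print(f"shortcut: {shortcut:.0f}x, exact: {exact:.0f}x")
# shortcut: 117x, exact: 94x -- 1.15/1.05 is ~9.5%/yr, so 117x is a bit generous
```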


Is this a general argument against high-risk high-potential-return giving (e.g. to existential risk charities) or do you think what Scott's doing here is clearly significantly worse than that?


For projects that have the potential to improve the lives of millions of people, the expected utility will often be much higher than a hundred lives saved, even if the improvement per person is really small.


This is wild. Excited to see where this goes


This is truly one of my favorite developments coming out of the rationalist community/Progress Studies/EA sphere.

What's the point of all this money sloshing around the economy and crypto if it's not gonna fund moonshots?

I thought about starting a rationalist community DAO: bootstrapping the token value a la any of the tokenomics mechanics printing money in crypto, creating a community fund, and voting on projects to disburse tokens to.


My hope is that this year I do a very normal grant round to set a baseline and make sure I can do this at all, and then next year I figure out some kind of crazy innovative idea like that (though probably with less crypto).


>make sure I can do this at all

My preregistered hypothesis is that this becomes one of the most successful and important things you do (as determined by you subjectively).

remind me! 10 years


Do we have remind-me bots for Substack???


No, tongue in cheek. But my hypothesis is genuine.


Now I’m wondering what it would take to set one up. Seems useful


Apply for a grant to develop one


Given that Substack doesn't have an API ... probably a web scraper, a server, and a droplet or two? This sounds like a weekend project at the outside for an interested hacker (especially if we only care about setting it up on this Substack).
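
Something like this, maybe (an untested sketch; since Substack has no API, the URL, the CSS selector, and the comment format here are all guesses):

```python
import re
import requests
from bs4 import BeautifulSoup

POST_URL = "https://astralcodexten.substack.com/p/some-post/comments"  # hypothetical
PATTERN = re.compile(r"remind me!\s*(\d+)\s*(day|month|year)s?", re.I)

def scan_for_reminders():
    """Scrape the comments page and collect 'remind me!' requests."""
    html = requests.get(POST_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    found = []
    for node in soup.select(".comment-body"):  # selector is a guess
        match = PATTERN.search(node.get_text())
        if match:
            found.append((int(match.group(1)), match.group(2).lower()))
    return found

# A real bot would persist these (e.g. SQLite), rescan on a cron job,
# and post a reply or send an email when a reminder comes due.
print(scan_for_reminders())
```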


Let me know if you get anywhere on the rationalist DAO idea.

One thing I've been mulling over recently is launching a dog coin which is actually a Trojan horse for distributing funds to EA causes. The amount of money that can accrue to a (successful) dog coin is staggering. The trick is just to figure out how to make it more of a Schelling point than all the other trash floating around in that category. It would probably help if we could get Vitalik and a few other crypto "celebrities" on board - I think a lot of people would be glad to see the stupidity of the current market put to good use for once.


Google "rainbow rolls NFT". Similar idea.


> starting a rationalist community

Maybe something set on 1 km^2 of farmland (to give lots of room for expansion) with a starting population of 100-1000. The intention being that it would grow to a population of 10,000+

It could be set up as a model city that would serve as an example of what a polity under Archipelago might look like (https://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/).

It might include the creation of a Rationalist Religion (in the sense of rituals/beliefs/values for making a community cohere, NOT in the sense of a cult).

Maybe cryptocurrency could be involved somewhere.

It would have a constitution meant for other organisations with similar but not identical goals to modify and reuse. (So eventually there might be lots of separate Archipelago communities (e.g. an LGBT one, a social conservative one, a Georgist one, etc.) -- all with their own separate principles but all agreeing to Archipelago Communitarianism.)


I tried this once. The twice five miles of fertile ground were really pleasant, but the woman wailing for her demon lover eventually ruined it.


Greg Cochran has suggested that if you replace the nitrogen in the air you breathe with helium, you might increase cognition and alertness. If this is tested and proved true, there would be enormous social benefits: we could put scientists (and AGI safety researchers!) in sealed rooms with such air. I've been trying to get someone to test this idea. Since I'm an economist, no sane person would let me mess with the air people breathe. But if anyone reading this has the qualifications to run the experiment, perhaps ACX Grants would be the right place to apply for funding.


I can't deny this is my kind of experiment, but I feel like it would be more kabbalistically appropriate to apply for a Helium Grant https://www.heliumgrant.org/

Comment deleted (Nov 12, 2021)

From the website:

Q: Are you ever going to do Helium Grants again?

A: I don't have any plans to start up again, but you never know.


Nadia has ceased to make Helium grants. She's working on a new book, I think.


It's a nice idea, but my application for a MacArthur Grant to invade the Philippines has been repeatedly declined.


Haha!


This strikes me as remarkably difficult and unproven relative to old-fashioned stimulants (Ritalin/Adderall/modafinil). But I'd love to see it studied anyways!


I’m just imagining these really smart people talking about world changing ideas with high squeaky cartoon voices.


Greg has suggested that if it works speaking with a high squeaky cartoon voice would become a sign of power and authority.


You didn't mention why it's worth looking at: nitrogen narcosis (https://en.wikipedia.org/wiki/Nitrogen_narcosis). Basically, divers who dive with nitrogen find that at higher partial pressures the nitrogen is an intoxicant. Every gas except helium and neon is an intoxicant to some extent: xenon at low pressures can knock people out. So if 2 atm of nitrogen pressure makes you tipsy, then what does the 1 atm we're all experiencing all the time do? And if we got rid of it by replacing it with helium, could we make people knurd?

Heliox wouldn't be that expensive to test out. Professional divers use it, and there are medical uses as well. You can either find some divers' tanks and hook people up, or get some medical-grade mixers to mix oxygen and helium from tanks and run a nasal cannula.

The more interesting question is what you'd want to test. Let's suppose we can get, I dunno, 30 people to wear nasal cannulas for 2 hours. I'd do something like this: set up the mixers appropriately so that oxygen is constant @ 21%, but you can swap the remainder smoothly between helium and nitrogen. You set things up so that people are on normal air for 1 hour and heliox for the other, with which comes first randomized.

As for the tests, maybe some combination of standard Raven's matrix questions and reaction time tests?

Can't seem to find heliox pricing anywhere, but I suspect you could do this for <$100K.
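
To make the analysis concrete, here's a toy Python sketch of the paired comparison that crossover design buys you (the data and effect size are simulated, not real):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
air = rng.normal(500, 50, n)             # simulated reaction times on air (ms)
heliox = air + rng.normal(-10, 20, n)    # pretend heliox shaves ~10 ms on average

# Each subject serves as their own control, so a paired test absorbs
# between-subject variation -- the whole point of the crossover design.
t_stat, p_val = stats.ttest_rel(air, heliox)
print(f"mean difference: {np.mean(air - heliox):.1f} ms, p = {p_val:.3f}")
```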


I think you could self-experiment for <$100 and notice any big effect worth noticing. I found a study showing a massive improvement in the cognitive function of divers under ~3.6 atm of pressure using heliox vs compressed air (https://europepmc.org/article/med/32176950), but no study at 1 atm.


The Stroop Test? What a can of worms this opens. The things I learn reading ACX, I tell ya.


Here is a 1975 study of replacing nitrogen with helium at 1 atmosphere:

https://europepmc.org/article/med/1130736


> So if 2 atm of nitrogen pressure makes you tipsy, then what does the 1 atm we're all experiencing all the time do?

It doesn't; most people need 4 atm (or 0.7->2.8 partial) to experience it.


It's a very implausible hypothesis anyway, contrary to the way almost all physical and biochemical systems operate. Systems in stable equilibrium, and subject to natural selection, are optimized for performance under normal conditions -- whatever those are, and one thing we can be sure about for humans is that "normal" includes 0.8 atm of N2. Deviations from normal, in either direction, can reasonably be expected to degrade performance.


This leads to the prediction that since human bodies are optimized for Earth's gravity, measured athletic ability for humans on the Moon would be lower than on Earth.


And so it would. I am confident that the winners of the World Cup would play far less competent soccer on the Moon. Gold-medal winners in the vault and floor exercises on Earth would probably injure themselves doing relatively simple routines on the Moon. Even Usain Bolt would probably run a slower 200m on the Moon than on the Earth, technique and timing being far more important in sprinting races than people ordinarily think.

I take it you are referring either to the crudest of possible measures of "athletic ability", like how far you can throw a ball or how high you can jump, or you are thinking of athletes who trained and got used to lunar gravity. The former is just sort of silly, but in the latter case I am not convinced that even with extensive training Earth-born athletes could out-compete native Lunar-born athletes (if those existed).


For what it's worth, we can also note that in the earliest years of spaceflight, people hypothesized that microgravity would be good for people, because the heart had to work less hard, there'd be less wear and tear on the body, et cetera. But as it empirically turned out, microgravity is on balance bad for the body and degrades health, and these days we recognize that a body tuned for 1g does *not* do well in 0g, contrary to the naive hypothesis.


Counterexample: plants grow faster under increased CO2. This is seen both in the fossil record as well as in direct experiments.


I'll gladly self-experiment with a 79-21 mix of helium and oxygen, but I don't need the money. I'd be surprised if divers and astronauts don't already do this, since helium is so much lighter than nitrogen. It's $7.57 per cubic meter, and we breathe 11 cubic meters per day, so even if the helium was not recycled at all, the cost would be only $7.57*11*0.79 = $66/day.


Try not to kill yourself, which is quite possible if you're playing with pure helium tanks. Get an oxygen tank too, along with a medical-grade gas mixer. Remember that the asphyxiation response is caused not by lack of oxygen but by the presence of carbon dioxide. If you're breathing a 5% O2 95% He mix, you'll happily pass out and die without ever feeling short of breath.


People on Everest seem to notice the lack of oxygen


I think AW is right about this. I was once in a room where someone left a nitrogen tank open - fire extinguisher propellant - and I started to get lightheaded and was close to keeling over when someone noticed the leak and turned it off. No distress at all, though. It's the CO2 build-up that causes distress.

I’ve read that death penalty states ruled out using nitrogen gas as a method of execution because death by that means is too pleasant.

I would think too much helium would be the same.

Yeah, thin air at high altitude is noticeable, but that is a different effect entirely.


> I’ve read that death penalty states ruled out using nitrogen gas as a method of execution because death by that means is too pleasant.

Any polity that has the death penalty and uses that argument is not, IMO, morally qualified to have the death penalty.


Okay, I checked on this. Alabama, Oklahoma, and Mississippi have authorized execution by nitrogen hypoxia. No state has used it yet as far as I know.

The business about states not using it because it is too pleasant may have been an editorial comment in an earlier article. The method is expected to result in sedation followed possibly by euphoria. I think the author of the recalled article thought this would go against what death penalty advocates wanted from an execution.

So some speculation on that writer’s part and imperfect memory on my part.


It sounds unlikely. Generally, states that want to apply the death penalty are hounded by people making disingenuous arguments about how cruel <whatever method is proposed> is, while the same people also work to make alternative methods impossible.

Personally I'm generally opposed to the death penalty because there can be irreparable miscarriages of justice. But if you ARE going to kill someone judicially, there seems little reason to sweat over the fact that death is generally unpleasant, whether you are being executed or not. Lop off their heads or whatever works, in most cases it's better than their victims got.


Not the same way. When you're at altitude you notice you're out of energy, and often out of motivation -- a serious problem in mountaineering by the way -- but you don't feel like you're suffocating.


In fact, one of the biggest dangers of mountain climbing is *not* noticing your body is low on oxygen. You just get dumber and dumber until you die.


Now *this* is the kind of Mad Scientist malarkey I'm here for!

Pros: come up with genius insight into pressing problem

Cons: people too busy laughing at your Donald Duck voice on helium to take you seriously


On the Internet, no-one knows you're a duck.


I laughed at this one


Oh well then you might enjoy Cody's Lab where he breathes *all* the noble gases. The results with helium and krypton are most fun, I think:

https://www.youtube.com/watch?v=rd5j8mG24H4&t=12s


I'd suggest incorporating a grant making organization to get around the taxes.


Probably I should do this before next time, but I understand it's pretty hard and might cost more than I save.


You just incorporate and then register with the IRS. It usually costs a few hundred dollars. You would be able to deduct what you give out from your taxes and to get out of gift tax in many cases. Presuming you don't hit the percentage-of-income limit and the average grant is $25k, you'd be saving about $90,000.


So I am not a tax expert and this is not tax advice and all that, but my understanding is that you won't have to pay gift taxes anyway, unless you end up making *way* more gifts than we're currently talking about - google "lifetime gift limit" and the first result's precis says "Most taxpayers won't ever pay gift tax because the IRS allows you to gift up to $11.7 million over your lifetime without having to pay gift tax."


That's what I thought, too. There's also a $15,000 per year exemption before the lifetime exemption kicks in, which I think is per recipient as well. So you should be able to give your $250k grant pool as 17+ grants of $15k or less without cutting into your lifetime exemption.
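
(Quick check of that arithmetic in Python; $15,000 is the 2021 per-recipient exclusion quoted below:)

```python
import math

pool = 250_000
annual_exclusion = 15_000  # per donor, per recipient, per year (2021)
grants = math.ceil(pool / annual_exclusion)
print(grants)              # 17 grants of <= $15k touch no lifetime exemption
```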


>There's also a $15,000 per year exemption before the lifetime exemption kicks in

That's right. Here's what the IRS says:

>How many annual exclusions are available?

>The annual exclusion applies to gifts to each donee. In other words, if you give each of your children $11,000 in 2002-2005, $12,000 in 2006-2008, $13,000 in 2009-2012 and $14,000 on or after January 1, 2013, the annual exclusion applies to each gift. The annual exclusion for 2014, 2015, 2016 and 2017 is $14,000. For 2018, 2019, 2020 and 2021, the annual exclusion is $15,000.

https://www.irs.gov/businesses/small-businesses-self-employed/frequently-asked-questions-on-gift-taxes
