I'm somewhat baffled that the animal welfare team convinced farmers to selectively abort all the male chickens. I eat a bunch of chickens, and eggs, so this is "not my fight" I suppose. I understand that under some value systems this makes sense, but it definitely approaches a danger zone, and I gather there has been some heated discussion of "negative utilitarianism" lately. Chickens don't have future-oriented mental states; perhaps they don't even have preferences at all aside from avoiding pain, in which case this approach could be correct.
I saw a Reddit discussion about this where it was suggested that farmers could raise the male chickens for meat, but don't, because they'd have a "surplus" of meat they couldn't sell, which would end up going to developing nations; this outcome was described as bad because it would weaken the local chicken-farming economies there. That seemed like a rather questionable justification to me; perhaps it ought to be looked into. If I were an activist against factory farming, from a practical standpoint I don't know if I'd want to lose "dude, they put all the male chickens into something called a grinder a day after they're born" as an argument by just killing them all in-ovo; they've just sane-washed factory farming for an outcome that may amount to the same thing.
I'm confused about your objection; even if chickens have subjective experiences, chicken *embryos* probably don't, so I don't see anything wrong with selectively aborting them?
Agreed on the mosquito drones as a visibly interesting approach to a perennial problem. Though…maybe one with some unfortunate consequences. I get that humans don’t output a specific ultrasonic frequency, and aren’t quite as vulnerable to a small electric charge…
Regarding male chickens: I don’t see how aborting male chickens could be any *worse* than the status quo. Even if we assume the zygote gets exactly the same internal experience as the chick would, it just happens sooner.
What exactly is the “danger zone” you have in mind?
On grant #14 on the list: as second author on the paper (I advised the project but was not paid by the grant), I strongly second that it cannot be evaluated on its own. That's the problem with marginal work promoting changes to complex systems; even if we get HCTs for the next pandemic, or get more of them for existing diseases, you'd need to do an infeasibly complex Shapley-value analysis across tons of efforts and projects with tons of unknowns to attribute part of the impact to this grant (see the formula sketched after this comment).
But, critically, difficulty measuring impact doesn't function as much evidence about extent of impact, and I think that the paper helps shift the burden of evidence for future studies and thereby saves significantly more lives in expectation than the cost. (I'm very comfortable with that estimate, albeit with low confidence about magnitude.) And I'm grateful that grants like this can be made by funders who can appreciate that it's hard to measure impact, and the expected impact is still the critical factor by which to judge grants - even though subjective estimation of that counterfactual impact remains unavoidable.
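For readers unfamiliar with the reference: the Shapley value assigns contributor $i$ a share of a jointly produced outcome $v$ by averaging $i$'s marginal contribution over every coalition $S$ they could have joined. A standard statement of the formula (nothing specific to this grant) shows why the comment calls the analysis infeasible:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$

With $n$ contributing efforts the sum ranges over $2^{n-1}$ coalitions, and every term requires estimating the counterfactual value $v(S)$ of a world with only that subset of efforts; with "tons of efforts and projects with tons of unknowns," that is exponentially many counterfactuals nobody can actually estimate.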
Regarding RadVac: I initially read Wentworth's post on LessWrong about "Making Vaccine".
I went ahead and did the same, and administered it to ~40 people, none of whom died of Covid (although in expectation none would have anyway). I also took their 40-page whitepaper and made it into a 2-page policy brief, and created an easy-to-follow instruction page for administration.
I also got involved with the RadVac team and helped them get their proposal in front of high-ranking government officials in several countries; ultimately none were willing to sponsor challenge trials, but we knew going in it would be a high-variance / high-expectation long shot.
I think there's something to be said here about "pulling sideways". It was definitely a lot of effort, but there were some relatively close almost-successes. It has significantly strengthened my belief that it's possible to make an impact in labor-limited areas.
> "This one is confusing to evaluate; the specific proposal failed, it encouraged its opponents to create a distraction proposal to sabotage it, and the distraction proposal unexpectedly passed, meaning that Seattle did get a more interesting voting method after all (although unclear whether it’s good). Is this a success of our grant?"
Fun fact: this is also how sex discrimination was banned in the United States (poison pill added to the Civil Rights Act of 1964 that failed to peel off enough votes to kill it), so I think you're in good shape calling this a success.
Regarding #6, I was that climate activist who wrote a rock opera, but it really didn't have a climate-activism theme. It was more a mytho-poetic sci-fi rumination on love, with some terraforming snuck into the plotline. I do like the idea of using music and theater to capture the public's attention for pressing topics like climate change, and I truly do appreciate the link to my opera recording.
61 certainly sounds like something I'd participate in. I seem to be unusually good at learning across a wide variety of areas, so I might provide some unusual data.
Hello - founder of Innovate Animal Ag here! Responding in particular to this: "One thing I still don't understand is that Innovate Animal Ag seemed to genuinely need more funding despite being legibly great and high status - does this screen off a theoretical objection that they don't provide ACX Grants with as much counterfactual impact?"
First of all, thank you so much for your support, and for your kind words. ACX has been extremely helpful to us in many ways, including but not limited to funding.
To you, we may seem "legibly great," but many people don't share your perspective. We have a very different approach from most animal welfare organizations in that we work very collaboratively with the animal ag industry to find win-wins - technologies that are good for welfare but also have a value proposition to farmers. Many from the traditional animal advocacy world are skeptical of this collaborative stance, and some even find it heretical. This, combined with the fact that most donors are still mostly interested in companion animals rather than farm animals, means that ~90% of donors who traditionally fund animal welfare wouldn't fund us.
In other words, it's great to have donors who are weird in similar ways that we are weird. It's possible that we could have raised this money from other sources, but this would have taken a lot longer, and distracted a lot of focus from program work. It also may have pushed in directions that are more legible to traditional funders, but ultimately less impactful.
Another counterfactual benefit of the ACX grants is that we hired multiple people either from ACX classifieds, or who read about us through ACX grants. Again, the reason for this is because your audience is similarly weird to us, and is particularly likely to be excited about our unique theory of change.
As a Seattle voter who wanted approval voting and is also uncertain about whether IRV is a net improvement for this specific circumstance: I consider this a successful ACX grant. One of our biggest weaknesses as a civilization right now is status quo bias and specifically a strong bias against experimentation and trying new things. This resulted in a major municipality trying a new thing, which is Good.
About 21 (ALLFED/Morgan Rivers). Morgan is not actively working at ALLFED anymore (which might be the reason you did not get a response), but the project I think this grant was referring to is mostly finished and has resulted in this peer reviewed paper: https://www.sciencedirect.com/science/article/abs/pii/S2211912424000695
Thank you!
IIRC the first round of grants was assigned entirely by you, and for the second round you partnered with Manifund to run impact markets.
How is that experiment going?
That's a good question. I'm only evaluating grants assigned by me. I'll ask Manifund how they're handling records for their impact market grants.
I'm the PI on number 20, microbial biodegradation of plastics. If anyone wants to ask questions or comment, reply to this comment, and I'll try to check my Substack throughout the next few days.
Hey James! I actually just emailed Scott about this and he pointed me at this post. I have a degree in biotech and I'm super interested. Email me at dydavidyoussef@gmail.com
> they expressed concern it was net negative (!) by taking away oxygen and spotlight from potentially more effective orgs
This style of "good thing is bad actually" argument is extremely common, and my general rule of thumb is to dismiss it out of hand unless it is backed up by a very detailed and specific case.
This manner of argument is not necessarily wrong, just nearly impossible to evaluate. It's a favorite argument of ideologues, who will forever criticize real-world progress as half-measures or corrupt compromises, which their preferred solution of course never would be because it would cut through any political economy considerations with a flaming sword of light.
I agree. There's also the related phenomenon where something that exists in the real world right now, warts and all, is compared to an imagined idea, without any warts.
It might be interesting to mass-outreach non-grantees, asking whether there was meaningful progress toward their proposed project even without the ACX grant. In trying to predict successful grants, a false positive rate is very useful, but so would be a false negative rate. You might be able to adjust toward whatever common factors the successful non-grantees and successful grantees share, while biasing against what the failures had in common.
But it seems like you've drawn the same lesson that most VCs have: the founding team is everything, and evaluating the idea itself a priori only goes so far. Many would-be startup founders who aren't from an Ivy League school or a FAANG company are bitter about how VCs filter for these things, but it's simply a matter of picking the best predictor of project success. If you couldn't succeed on the very straight path toward an Ivy League, how can VCs expect you to succeed on the far murkier path of building a startup?
This gives me the idea of an email response time index, where a person's manic obsession with doing things NOW is rated by how fast they respond to different types of emails. Maybe a Chrome extension with a public ranking?
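For anyone tempted to build the observational version, here's a minimal sketch of what the measurement could look like, assuming a local mbox export named "inbox.mbox" (that file name, and ranking by median latency, are my assumptions, not part of the original idea):

```python
# Sketch of an "email response time index": pair each reply with its parent
# message via the In-Reply-To header, then rank senders by median reply
# latency in minutes. Uses only the Python standard library.
import mailbox
import statistics
from email.utils import parsedate_to_datetime

def median_latencies(mbox_path):
    msgs = {}
    for msg in mailbox.mbox(mbox_path):
        mid = (msg.get("Message-ID") or "").strip()
        if mid and msg.get("Date"):
            msgs[mid] = msg
    latencies = {}  # sender -> list of reply delays in minutes
    for msg in msgs.values():
        parent = msgs.get((msg.get("In-Reply-To") or "").strip())
        if parent is None:
            continue
        try:
            delay = (parsedate_to_datetime(msg["Date"])
                     - parsedate_to_datetime(parent["Date"]))
        except (TypeError, ValueError):
            continue  # unparseable dates or naive/aware mismatch
        sender = msg.get("From", "unknown")
        latencies.setdefault(sender, []).append(delay.total_seconds() / 60)
    return {s: statistics.median(v) for s, v in latencies.items()}

if __name__ == "__main__":
    for sender, minutes in sorted(median_latencies("inbox.mbox").items(),
                                  key=lambda kv: kv[1]):
        print(f"{minutes:8.1f} min   {sender}")
```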
Agree with the second paragraph ("The founding team is everything"), but I have to add that just because a team is great in one sector (e.g. industrial products) doesn't mean that the same team will be great in a different one (e.g. social media, government). Just look at Musk.
Compared to the average startup founder, Musk has produced something like 10,000x the average in a few different industries. A previous successful exit is the #1 indicator of a good VC bet, and on the whole Musk has been in charge of more than one seriously successful company, along with a good number of failures.
It's never about causation, though, always correlation, as what causes success in business is extremely varied, industry-specific, and often due to luck. Still, his ability to close a $10+ billion funding round in a single day makes him an excellent bet for essentially anyone, as the downside risk of any of his companies imploding from running out of money is very small.
We should be very careful about assuming that a market which underperforms index funds is efficient. Also, if it were simply filtering on school quality rather than pure in-group preference, we wouldn't expect to see such clustering around specific schools rather than school tier. But we do.
It might be that there are just lower margins to be had in VC in general. While it underperforms index funds, it doesn't do so by a lot, and the variance in returns is significantly higher.
I'm not so sure about school tier. Different schools are known for different things. Yale is known for its humanities and pre-law track, while MIT is known for engineering. They will naturally attract people with different interests that are more or less likely to succeed at a startup.
If you grant that allocation by school reflects school specialty rather than a tier system, then you're saying it's not about whether founders could succeed at the admissions path but about the results of the education, which is a fundamentally different claim from "getting into an elite school shows inherent traits necessary for success."
And that's without getting into how random admission chances are these days. How much difference is there really between someone who can get into Harvard vs. Yale? How many people choose based not on preference but on which school lets them in?
And VC shows a stronger power-law effect than indexes. Effectively, all index funds perform the same as their index. But in VC the majority of firms perform far below average while a few perform far above (see the toy simulation after this comment). If you accept this is true (and it is), that means inherently a significant part of the capital is being inefficiently allocated. That is, all the capital going to funds that turn out not to be top 10% (or whatever the cutoff is). So drawing conclusions from where 90% of firms choose to put their capital is a bad heuristic.
Of course, part of the reason that happens is discovery of which VC models work, which appears to be heterogeneous, implying there is not a uniform standard but multiple niches. This is the opposite of what you'd expect in a market with straightforward returns to easily trackable prestige. And in fact implies if you're starting a fund you'd want to differentiate by finding signals other firms have missed. Basically finding your own niche.
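The distributional claim itself is easy to illustrate. Here's a toy simulation (the lognormal stand-in and its parameters are invented, not calibrated to real VC data) in which most funds land below the industry average purely because the tail drags the mean up; whether that implies misallocation is the part the rest of this thread disputes:

```python
# Toy model of heavy-tailed fund returns: with lognormal multiples, rare
# outliers pull the mean far above the median, so "the majority of firms
# perform below average" falls out of the distribution's shape alone.
import random
import statistics

random.seed(0)
funds = [random.lognormvariate(0.0, 1.2) for _ in range(10_000)]  # invented params

mean = statistics.fmean(funds)
median = statistics.median(funds)
share_below = sum(f < mean for f in funds) / len(funds)

print(f"mean multiple:   {mean:.2f}x")
print(f"median multiple: {median:.2f}x")
print(f"funds below the average: {share_below:.0%}")  # roughly 73% here
```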
There may be multiple causes going into the correlation.
Getting into a top-tier school requires the ability to work hard and diligently, and to be quite intelligent.
Among them, perhaps different schools select for different personalities and skills. There's no reason to expect the best writer of the high school class of 2025 should be a good startup founder, and they would probably have a good chance of going to Yale, while there's good reason to expect the best software engineer of the same cohort to get into MIT, and they would have significant advantages in building a startup.
Essentially, two layers of selection effects.
> But in VC the majority of firms perform far below average while a few perform far above. If you accept this is true (and it is) that means inherently a significant part of the capital is being inefficiently allocated.
Not necessarily so. It may simply be that the average returns in VC are lower than the market average, or that investors are willing to accept lower average returns for the possibility of much higher returns on the tails. It may be that investors simply enjoy investing in VC more than an index for reasons of prestige and novelty, so are willing to sacrifice some average returns in return for that.
90% of firms don't use schooling to determine where to allocate capital. The top performing firms use that as an indicator for most first-time founders, among many others. I'm not saying there aren't better indicators out there, but the firms who use this one obviously value it, and they have a much larger financial incentive to understand it than you or I. There may not be better indicators than what they're already using, and those top 10% performing VC firms get there due to luck, or due to their ability to better push their portfolio into subsequent rounds.
How would you distinguish this from simple in group preference? The story that VCs tend to favor the schools they went to or feel most comfortable evaluating people similar to themselves?
If VC returns are below market average structurally, then the entire industry is inefficient by definition. There are already plenty of ways to risk-adjust returns, too. And the claim "90% of firms don't use schooling to determine where to allocate capital" seems insupportable to me. You're really going to claim that heavily recruiting from Harvard is uncommon? Or a secret?
I tend to take VCs at their word about what they're doing which is basically they have a thesis they follow. If that thesis consistently includes "people like me should get large amounts of capital to use with ideally minimal consequences for failure" I do not necessarily take that at face value. Especially if doing that tends to underperform.
Another question: a16z and YC both offer a supportive ecosystem and claim that admission to that ecosystem is a large part of why their founders are successful, rather than innate traits, which means who they select is only part of the cause of success. If this is true, and if schools affect who is best suited to certain kinds of work, do you see how that creates a system where two equally talented people might end up in unequal positions? If it's not true then why do elite VC firms and schools (who presumably are in a better position to know) claim otherwise?
I suspect the email response thing is especially susceptible to Goodharting. In the observational case, people respond quickly to emails because they care deeply and are working obsessively on the problem. In the Goodharted case, people respond quickly because they know their response tendencies are being evaluated. In academia, many professors rule their labs like petty dictators and expect prompt responses to emails at all hours of the day; since the rapid responses are coming from a place of fear, it's not at all clear that the terrified students are more effective scientists (probably the opposite, in fact).
FWIW if I had a few of these "good heuristics for grant effectiveness" I might not even publicize them to preserve their usefulness.
On the topic of climate change activism and plays, I've also come across this seemingly odd intersection.
Several years ago (maybe 2019 or so) I was working for a UK eco-home builder that was building a development of zero-carbon homes in the North of England. Someone asked us for an interview because they were writing a play about a brother and sister who were trying to build a zero-carbon home, where the brother was extremely motivated to tackle climate change. At the time I thought it was strange, but clearly there's a pattern.
I've also been to climate protests where Extinction Rebellion's 'Red Rebel Brigade' put on a kind of street theatre. They call themselves an 'international performance activist troupe'. They're the people who dress up in all red costumes that serve as a calling card for Extinction Rebellion marches. You can see how they'd be a natural fit for theatricality and activism. They are both performing to an audience.
Not that surprising given how much art majors tend to skew left. Sort of a "when all you have is an interpretive dance degree, everything, including carbon-emission induced climate change, looks like a problem that can be solved via interpretive dance" situation.
Conversely, people who feel really strongly about most anything are more likely to produce art about it. (Citation needed.)
Hey, Trevor here, founder of Highway Pharmaceuticals. First of all, still very grateful for the grant. The money helped, but the credibility also helped a lot.
Second of all, the math isn't right on fundraising. Our/ValueBase's valuation isn't public, but ValueBase has publicly *raised* $14 million. It's probably valued at 3-5x that.
Good point on ValueBase. I won't ask for your nonpublic information, but regarding your company I thought the "valuation cap" on https://kingscrowd.com/highway-pharmaceuticals-on-wefunder-2022/ was the right number to use; am I misunderstanding?
Oh yeah, that was correct as of that date.
"our technology will increase breeding rates by 10-100x, [making cows orders of magnitude more productive] in a couple of years.”
Dumb question: this sounds like this is expecting cows to produce 10-100 calves per year. That's not possible, so what do they actually mean by this?
It sounds like Jeff wants to do iterated meiotic selection (which I proposed in 2022: https://denovo.substack.com/p/meiosis-is-all-you-need ) to speed up the effective generation time.
Thanks, that makes sense.
I submitted an up
The idea of iterated selection goes back really far:
Georges, M., and J. M. Massey. 1991. "Velogenetics, or the synergistic use of marker assisted selection and germ-line manipulation."
Current practice is embryo transfer, in which a cow with desirable traits produces the embryo, which is then transferred to a recipient cow. I am told that using this method a cow can produce 30 calves per year.
I think they might mean that because they can breed cows for some positive trait (like milk production) faster, cows' milk production could go up an order of magnitude.
Some quick Googling suggests that cows' milk production has increased about 10x since the Middle Ages, but largely through producing milk year-round. A further increase would require larger udders and/or more milkings per day; at, say, 2x udder size and 5x the milkings, I'd guess one order of magnitude may be possible, but two seems unlikely before it becomes either impractical or unethical.
From what Metacelsus and Banko Killdeer say in their comments, I suspect that the productivity referred to here is the productivity of the breeding process. No doubt the end goal is the productivity of actual food production, though.
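On that reading, 10-100x is plausible as a claim about genetic gain per year rather than calves or milk per cow. A back-of-envelope sketch using the standard per-year form of the breeder's equation; every parameter value here is an invented placeholder, not a number from the grantee:

```python
# Genetic gain per year ~= (selection intensity i * selection accuracy r *
# additive genetic SD sigma_a) / generation interval L. Iterated in-vitro
# selection mainly attacks L. All values below are invented placeholders.
def gain_per_year(i, r, sigma_a, L_years):
    return i * r * sigma_a / L_years

i, r, sigma_a = 2.0, 0.7, 1.0   # assumed intensity, accuracy, genetic SD

conventional = gain_per_year(i, r, sigma_a, 5.0)   # ~5-year cattle generations
fast_cycle = gain_per_year(i, r, sigma_a, 0.25)    # hypothetical 3-month in-vitro cycle

print(f"conventional: {conventional:.2f} units/year")
print(f"fast cycle:   {fast_cycle:.2f} units/year "
      f"({fast_cycle / conventional:.0f}x faster)")  # 20x, inside the claimed 10-100x
```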
> Are lobbying organizations a better bet than other types of nonprofit (within the constraints of ACX Grants)?
Yes, development and lobbying are shockingly easy if you just take the time to understand the systems. The system is set up to be responsive!
> One disappointing result was that grants to legibly-credentialled people operating in high-status ways usually did better than betting on small scrappy startups
You admitted you partly selected on this, so this conclusion is tainted by selection bias. Suppose the two groups have roughly equal percentages of potentially successful people, but the credentialed group is smaller and your screening skill for non-credentialed people is worse by default. Then selecting a larger share of the smaller, better-screened group will mechanically yield more successful examples per capita: you'll have more false positives (and more missed true positives) in the non-credentialed group, and fewer false positives plus disproportionately many true positives in the credentialed group. (See the toy simulation after this comment.)
Basically you've rediscovered the social purpose of signaling, and concluded that credentialed people are better rather than recognizing signaling as the self-reinforcing mechanism it is. Put another way, you probably counted EA credentials as legible because they're legible to you. They are probably far less legible to Harvard. This shows the value lies in signaling TO YOU, not in the difficulty or generality of the credential, because one of these is literally global and the other is very niche.
Likewise, any number of niche credentialing organizations create the same effect within their communities because the credential is a signal/coordinating mechanism, not a sign of merit. If it were simply based on merit you'd expect these more niche signaling mechanisms not to work because they'd be strictly inferior to the more global and difficult to achieve ones like Harvard. But instead you see a diversity of credentials with local prestige.
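Here is a minimal simulation of the selection effect described above, with every parameter invented for illustration: the two applicant pools have identical true quality, and the only difference is how noisily the evaluator can read each applicant:

```python
# Two pools with IDENTICAL quality distributions; the evaluator's signal is
# noisier for "illegible" (non-credentialed) applicants. Funding the top 5%
# of each pool by observed signal then yields very different success rates,
# purely as a screening-accuracy artifact. All numbers are invented.
import random
import statistics

random.seed(0)
N = 100_000
SUCCESS_CUTOFF = 1.0  # an applicant "succeeds" if true quality > 1 SD
FUND_SHARE = 0.05     # fund the top 5% of each pool by observed signal

def funded_success_rate(signal_noise_sd):
    quality = [random.gauss(0, 1) for _ in range(N)]
    signal = [q + random.gauss(0, signal_noise_sd) for q in quality]
    cutoff = sorted(signal, reverse=True)[int(N * FUND_SHARE)]
    funded = [q for q, s in zip(quality, signal) if s >= cutoff]
    return statistics.fmean(q > SUCCESS_CUTOFF for q in funded)

print(f"legible pool (signal noise 0.5):   {funded_success_rate(0.5):.0%} succeed")
print(f"illegible pool (signal noise 2.0): {funded_success_rate(2.0):.0%} succeed")
```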
> Someone (I think it might be Paul Graham) once said that they were always surprised how quickly destined-to-be-successful startup founders responded to emails
I think that was Sam Altman (at the time still working at YC):
https://conversationswithtyler.com/episodes/sam-altman/
"ALTMAN: You know, years ago I wrote a little program to look at this, like how quickly our best founders — the founders that run billion-plus companies — answer my emails versus our bad founders. I don’t remember the exact data, but it was mind-blowingly different. It was a difference of minutes versus days on average response times."
Re: climate change, I think spiking demand for energy to power data centers over the next few years is going to require some changes in messaging/strategy to avoid seeming to be standing in the way of progress.
CMV: AI Safety is the Climate Activism of rationalists.
You'll have to explain more about what you mean.
It's mostly cosplay. The game theory of AI means the Sam Altmans of the world will race ahead doing whatever they want while paying lip service to a movement that spends most of its time spinning up yarns on the internet.
They'll justify maximalist expansion through some combination of fears of China and telling everyone that in the future they won't have to work.
And when that future comes, instead of funding UBI with higher taxes they'll do everything in their power to keep the carried interest loophole and avoid the unspeakable tragedy of paying Clinton era marginal tax rates.
I don't think climate activism is mostly cosplay. US per capita carbon emissions have gone down by almost half even as per capita GDP has increased (see https://www.statista.com/statistics/1049662/fossil-us-carbon-dioxide-emissions-per-person/). This took immense effort, upset a lot of people, and should be considered one of the most surprising and impressive political victories of the past few decades.
I don't think AI safety has been cosplay either. It sounds like your only argument is that maybe AI companies can ignore us; even if that's true, it doesn't make our work "cosplay". But also, most AI companies have safety teams, there's some decent safety research being done, several national governments have AI safety institutes, and there are various AI safety related laws at different stages of completeness. While I agree that our chances are far from certain, I think "cosplay" requires some kind of unseriousness that I deny.
I think the fact that your main worry is whether there will be Clinton era marginal tax rates suggests you're not coming at this from a place of giving a fair shake to our concerns.
I seriously doubt the median climate activist is interested in the technological advances that have lowered carbon emissions, or the fact that we've outsourced a lot of our carbon emissions (and other pollution) to China. The Sunrise Movement spent most of 2021-2022 tying themselves in knots over antiracism nonsense instead of helping Biden pass climate legislation.
An AI safety team to me looks more like a Wall Street compliance department than anything else -- something they maintain ostensibly to avoid breaking the law, but that is more geared toward shielding management from liability while making it hard to even *tell* if they're breaking the law. Witness how cagey OpenAI's CTO was about whether they used YouTube for video training.
The reason I mention Clinton-era tax rates is that since 1994 the defining issue for the one percent -- the one thing they will go to the mat for in Congress, which we see now with the BBB -- is avoiding a higher marginal tax rate.
Given the track record, my default position is that talk of UBI is just PR.
Edit: On national governments having AI safety institutes -- the JD Vance argument of "screw that" seems to be winning in the US. Every day I see memes about how the EU is a regulatory disaster.
See this discussion of the claim that we "outsourced our carbon emissions to China" - https://www.noahpinion.blog/p/no-the-us-didnt-outsource-our-carbon . I don't know how you can deny the many carbon use regulations that have impacted every part of life in the US.
I also think you don't understand the role of AI safety teams. They're certainly not "compliance" - so far there are very few regulations to comply with, and many AI companies get along fine without any safety team. If you look at these teams' work, much of it is speculative research about superintelligence - see everything from Anthropic's interpretability research to OpenAI's chain-of-thought faithfulness and scalable alignment work. Some of it also has near-term implications, but I think the long-term work stands on its own.
Not that it matters, but I think your claim that the "defining issue for the one percent" is tax rates doesn't check out either. The one percent are currently about 50-50 Democrat vs. Republican (see https://catalist.us/whathappened2024/, but this is in the context of a good year for Republicans, and in 2020 it was more Dem-skewed), even though Republicans have consistently been the party of tax-cuts-for-the-rich. I think the top one percent don't really have a single defining issue, but insofar as they do it's mostly culture wars stuff like everyone else.
For all the time and money spent on AI safety -- did *anyone* predict that the first casualty would be college students cheating on exams? Have any of the AI safety people suggested legislation or other systemic mechanisms to prevent that? ("Just use bluebooks" is not systemic.) It's like the phones problem. Everyone knows phones are driving us crazy, but it's only in the last couple of years that individual school districts are doing anything about it. There are more systemic solutions that could be tried, but everyone knows the lobbying apparatus would prevent them from even getting out of committee.
As someone who was married to a Chinese woman and spent significant time there, I simply disagree with Noah. It is plainly obvious to anyone who visits China that there is an enormous amount of unreported emissions happening there. Noah has expressed doubts about China's economic reporting in general, so I don't know why he believes them in this one case.
In 2008 the US Embassy in Beijing started tweeting out daily air pollution readings from their own monitors. These dramatically differed from Beijing's reported numbers (the embassy stopped doing that in March, by the way).
https://www.pnas.org/doi/10.1073/pnas.2201092119
On the one hand, it did lead China to do something about the pollution, but that was also in part because there were widespread protests (e.g. https://www.wikiwand.com/en/articles/Shifang_protest).
On the other hand, there is no similar mechanism for carbon emissions. Given how we've seen China (not) handle something that directly causes emphysema and asthma in their own population, why would we expect them to be vigilant about something that doesn't have such direct effects?
The 50/50 one percent thing is better understood in terms of the source of their wealth. One faction is the Abigail Disney/Christy Walton class -- heirs who donate to do-gooder orgs like the Sunrise Movement and the Hewlett Foundation. It's like the Ford Foundation -- everything that starts conservative ends up liberal. And they've *hurt* Democrats in terms of electability and being able to wield power. That's what "the groups" discourse is about.
Then there's the Peter Thiel/Ken Griffin faction. They are working every day in their business and can see direct, material results from their efforts to influence tax law and regulation.
OpenAI, Google, and Meta all spend a *lot* on lobbying, and their AI safety teams can be viewed as an extension of that. "See? We really care about safety, so how about we table legislation for now."
Even if I buy the AI safety stuff you're selling, I still think the game theory means most of it will be for naught. When push comes to shove and governments show they're willing to spend significant sums on mass surveillance, hacking of nation-states, and autonomous killer drones, those companies will provide a solution. There's a reason Palantir's stock has gone parabolic.
And they definitely will not fund UBI.
Liable to remain fringe until one tribe realizes they can hollow it out, wear it as a skinsuit, and build support for an unrelated political battle?
I was in Alberta when an interesting ranked choice election happened. It was for leader of the Progressive Conservatives, which effectively decided the Premier of the province. The top two contenders went attack dog on each other, prompting their voters to put the third-place guy as their second choice. So when neither won a majority, the third guy cleaned up in the second round.
This dynamic could kill excessive partisanship. In this particular instance the third-place guy, Ed Stelmach, was not a quality candidate compared to the other two, imo. One of the most confidence-shaking moments of my life was when he raised the minimum wage as oil prices crashed and help-wanted signs vanished across the province. The guy had no competition and was the leader of a conservative party in a conservative province. He also didn't just neglect to comment on a 'human rights' commission ruling that banned a pastor for life from public comment about the gay agenda, including emails - he vocally supported it.
Presumably, though, quality candidates would soon learn that excessive partisanship can hand the election to the guy who got 20% in the first round. This does make me think approval voting is far better, even if ranked choice is a solid upgrade.
What you're describing only happens in Condorcet systems. IRV (Instant Runoff Voting) wouldn't lead to that, as the "third place guy" would get eliminated in the first round.
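To make that concrete, here's a minimal sketch of IRV tallying in Python; the candidate names, ballot counts, and preference orders below are made up to mirror the Alberta story, not real data:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant Runoff Voting: repeatedly eliminate the candidate with the
    fewest first-choice votes until someone holds a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts toward its highest-ranked surviving candidate.
        tallies = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total:
            return leader
        # No majority yet: drop the weakest candidate and redistribute.
        candidates.discard(min(tallies, key=tallies.get))

# Hypothetical electorate: A's and B's bases both rank compromise
# candidate C second, but C is third on first preferences.
ballots = (
    [("A", "C", "B")] * 40
    + [("B", "C", "A")] * 35
    + [("C", "A", "B")] * 25
)
print(irv_winner(ballots))  # prints "A": C is eliminated in round one
```

Here C would beat either rival head-to-head (60 of 100 ballots prefer C over A, and 65 prefer C over B), so a Condorcet method elects C; IRV instead eliminates C immediately for having the fewest first preferences. That's the difference being pointed at above.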
> Someone (I think it might be Paul Graham) once said that they were always surprised how quickly destined-to-be-successful startup founders responded to emails - sometimes within a single-digit number of minutes regardless of time of day. I used to think of this as mysterious - some sort of psychological trait? Working with these grants has made me think of it as just a straightforward fact of life: some people operate an order of magnitude faster than others.
I know a few people who respond to emails quickly but not very well, and I also feel like this sort of “respond to emails quickly” mentality lines up well with people who don’t take their work-life balance very seriously either—which is probably a good trait to have if you want to be a startup founder.
I’ve been through periods where I respond very quickly and periods where I need reminders to respond. It is absolutely a window into mindset, energy, motivation, dedication and prioritization.
Someone who responds quickly is on top of things. Someone who doesn’t is mentally overwhelmed, distracted, disorganized or depressed.
Very impressive to read all the good things that came out of that; thanks to everyone involved for making the world better.
Isn't IRV strictly better than Approval Voting? This just seems like a bigger win than expected.
Noooo, don't you dare start this again!
Again?
Where can I find the previous start of this?
The case for ranked-choice over approval: https://fairvote.org/resources/electoral-systems/ranked_choice_voting_vs_approval_voting/
The case for approval over ranked-choice: https://electionscience.org/education/approval-voting-vs-rcv
I once attended an online debate (hosted by an EA group) between representatives of these two orgs, but I don't think it's online anywhere.
See also https://xkcd.com/1844/.
There's a lot to like here; the anti-mosquito drones in particular look like they'd have both public-health utility and market potential. The AI safety advocacy probably dwarfs anything else going on here in terms of magnitude; it's just hard to know which actions will actually turn out to be relevant, so the existence of advocacy groups that can pivot as the industry changes is really the only way to handle that. (I don't have any background in grants, am terrible at writing and funding them, and have no worthwhile opinion on whether you should keep doing this, but I do think you should continue, in some fashion, to try to place people in key policymaking spots ahead of the big AI decision points.)
I'm somewhat baffled that the animal welfare team convinced farmers to selectively abort all the male chickens. I eat a bunch of chickens, and eggs, so this is "not my fight" I suppose. I understand that under some value systems this makes sense, but it definitely approaches a danger zone, and I gather there has been some heated discussion of "negative utilitarianism" lately. Chickens don't have future-oriented mental states; perhaps they don't even have preferences at all aside from avoiding pain, in which case this approach could be correct.
I saw a reddit discussion about this where it was suggested that farmers could raise the male chickens for meat, but don't, because they'd have a "surplus" of meat they couldn't sell, which ends up going to developing nations - and this outcome was described as bad because it weakens the local chicken farming economy in those nations. That seemed like a rather questionable justification to me; perhaps that ought to be looked into. If I were an activist against factory farming, from a practical standpoint I don't know if I'd want to lose "dude, they put all the male chickens into something called a grinder a day after they're born" as an argument by just killing them all in-ovo - they've just sane-washed factory farming for an outcome that may amount to the same thing.
I'm confused about your objection; even if chickens have subjective experiences, chicken *embryos* probably don't, so I don't see anything wrong with selectively aborting them?
Agreed on the mosquito drones as a visibly interesting approach to a perennial problem. Though…maybe one with some unfortunate consequences. I get that humans don’t output a specific ultrasonic frequency, and aren’t quite as vulnerable to a small electric charge…
Regarding male chickens: I don’t see how aborting male chickens could be any *worse* than the status quo. Even if we assume the zygote gets exactly the same internal experience as the chick would, it just happens sooner.
What exactly is the “danger zone” you have in mind?
On Grant #14 on the list, as second author on the paper (I contributed as an advisor to the project, but was not paid by the grant), I strongly second that it cannot be evaluated on its own. That's the problem with marginal work promoting changes to complex systems: even if we get HCTs for the next pandemic, or get more of them for existing diseases, you'd need to do an infeasibly complex Shapley-value analysis across tons of efforts and projects with tons of unknowns to attribute part of the impact to this grant.
But, critically, difficulty measuring impact doesn't function as much evidence about extent of impact, and I think that the paper helps shift the burden of evidence for future studies and thereby saves significantly more lives in expectation than the cost. (I'm very comfortable with that estimate, albeit with low confidence about magnitude.) And I'm grateful that grants like this can be made by funders who can appreciate that it's hard to measure impact, and the expected impact is still the critical factor by which to judge grants - even though subjective estimation of that counterfactual impact remains unavoidable.
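To make the combinatorics concrete, here's a toy sketch of a Shapley-value attribution in Python; the three "contributors" and the lives-saved numbers are entirely hypothetical:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over every join order.
    Requires an outcome estimate for every coalition, so the full analysis
    grows exponentially with the number of contributors."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Hypothetical outcomes (lives saved) for every subset of three efforts.
outcomes = {
    frozenset(): 0,
    frozenset({"paper"}): 10,
    frozenset({"advocacy"}): 20,
    frozenset({"funder"}): 0,
    frozenset({"paper", "advocacy"}): 50,
    frozenset({"paper", "funder"}): 30,
    frozenset({"advocacy", "funder"}): 40,
    frozenset({"paper", "advocacy", "funder"}): 100,
}
print(shapley_values(["paper", "advocacy", "funder"], outcomes.get))
```

Even this three-contributor toy needs outcome estimates for all 8 coalitions; with 30 interacting projects you'd need over a billion, each one a counterfactual nobody can observe - hence "infeasibly complex."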
Regarding RadVac: I initially read Wentworth's post on LessWrong about "Making Vaccine".
I went ahead and did the same, and administered it to ~40 people, none of whom died of Covid (although in expectation none would have anyway). I also took their 40-page whitepaper and condensed it into a 2-page policy brief, and created an easy-to-follow instruction page for administration.
I also got involved with the RadVac team, and helped them get their proposal in front of high-ranking government officials in several countries; ultimately none were willing to sponsor challenge trials, but we knew going in it would be a high-variance / high-expectation long shot.
I think there's something to be said here about "pulling sideways". It was definitely a lot of effort, but there were some relatively-close almost-successes. It has significantly improved my belief that it's possible to make an impact in labor-limited areas.
> "This one is confusing to evaluate; the specific proposal failed, it encouraged its opponents to create a distraction proposal to sabotage it, and the distraction proposal unexpectedly passed, meaning that Seattle did get a more interesting voting method after all (although unclear whether it’s good). Is this a success of our grant?"
Fun fact: this is also how sex discrimination was banned in the United States (poison pill added to the Civil Rights Act of 1964 that failed to peel off enough votes to kill it), so I think you're in good shape calling this a success.
Hey! Luca De Leo here from the 2021 round. I don't think I ever got an email asking for an update but I just filled in the form at the top.
Thanks a lot for everything you do!
I'm the founder of Spartacus.app, the conditional commitment platform, and a 2024 grantee.
Just wanted to thank Scott again for the help in getting the project off the ground.
We still have an opening for a summer intern!
DM me if interested.
Regarding #6, I was that climate activist who wrote a rock opera, but it really wasn't a climate activism theme - more a mytho-poetic sci-fi rumination on love, with some terraforming snuck into the plotline. I do like the idea of using music and theater to capture the public's attention for pressing topics like climate change, and I truly do appreciate the link to my opera recording.
#61 certainly sounds like something I'd participate in. I seem to be unusually good at learning across a wide variety of areas, so I might provide some unusual data.
This is fantastic! It's inspiring to get this update about ACX grant's support of rational progress. Thank you.
Hello - founder of Innovate Animal Ag here! Responding in particular to this: "One thing I still don't understand is that Innovate Animal Ag seemed to genuinely need more funding despite being legibly great and high status - does this screen off a theoretical objection that they don't provide ACX Grants with as much counterfactual impact?"
First of all, thank you so much for your support, and for your kind words. ACX has been extremely helpful to us in many ways, including but not limited to funding.
To you, we may seem "legibly great," but many people don't share your perspective. We have a very different approach from most animal welfare organizations in that we work very collaboratively with the animal ag industry to find win-wins - technologies that are good for welfare but also have a value proposition to farmers. Many from the traditional animal advocacy world are skeptical of this collaborative stance, and some even find it heretical. This, combined with the fact that most donors are still mostly interested in companion animals rather than farm animals, means that ~90% of donors who traditionally fund animal welfare wouldn't fund us.
In other words, it's great to have donors who are weird in the same ways we are weird. It's possible that we could have raised this money from other sources, but it would have taken a lot longer and distracted a lot of focus from program work. It also may have pushed us in directions that are more legible to traditional funders, but ultimately less impactful.
Another counterfactual benefit of the ACX grants is that we hired multiple people either from ACX classifieds or who read about us through ACX grants. Again, this is because your audience is weird in ways similar to us, and is particularly likely to be excited about our unique theory of change.
As a Seattle voter who wanted approval voting and is also uncertain about whether IRV is a net improvement for this specific circumstance: I consider this a successful ACX grant. One of our biggest weaknesses as a civilization right now is status quo bias and specifically a strong bias against experimentation and trying new things. This resulted in a major municipality trying a new thing, which is Good.