“I don’t consider us to still be dating” - Brutal!
What an interesting job. Never having considered it before, I would look at their previous work and see how much of past grants was wasted on excessive management fees and bureaucracy instead of actually going toward what the money was intended for.
If you stumbled backwards into the concept of VC, I think it's plausible that you might do well stumbling further into "incubator" territory. A lot of your issues seem to stem from idiosyncratic people who have never written a grant proposal in their lives, much less a successful one. The shoestring version could be a virtual session and a used copy of a proposal-writing textbook, mailed to each applicant three months before opening the door. Or you could consider something more personal and intense, Y Combinator style, or anywhere in between.
Why not simply judge whether or not you’d be very happy to fund each grant, and then just randomly choose grants to fund from among those? Much simpler to administer, avoids tying yourself in 4D knots, and the losers of this process can then be publicly posted with no negative signaling risk.
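A minimal sketch of that lottery idea, assuming you have already flagged the set of applications you'd be happy to fund (the application IDs, budget, and grant size below are made up for illustration):

```python
import random

def lottery_fund(applications, budget, grant_size, seed=None):
    """Fund a uniform random subset of already-approved applications.

    applications: IDs of applications the grantmaker would be happy to fund.
    budget, grant_size: determine how many grants can be afforded.
    Returns the randomly chosen winners; everyone else can be listed
    publicly as "approved but not drawn", with no negative signal.
    """
    rng = random.Random(seed)
    n_grants = min(len(applications), budget // grant_size)
    return rng.sample(applications, n_grants)

# 70 fund-worthy applications, $1.5M budget, $50k grants -> 30 winners
approved = ["app-%d" % i for i in range(70)]
winners = lottery_fund(approved, budget=1_500_000, grant_size=50_000, seed=42)
print(len(winners))  # 30
```

The appeal of the design is exactly what the comment says: the only judgment call is the binary "would I be happy to fund this?", and the draw itself is trivially auditable.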
>VC-ing is a field as intense and complicated and full of pitfalls as medicine or statistics or anything else.
Here's a really interesting article by Tucker Max about why he got out of venture capital.
Short version: it takes way more time than you'd think. To be a successful angel investor, you pretty much have to know your founders' businesses as well as they do themselves.
"Even though angel investing looks like this casual, easy and fun activity, make no mistake about it, if you want to avoid losing your shirt, you spend a LOT of time on it: finding deals, vetting companies you’re interested in, and then once you invest, working with them like hell to make them succeed.
Just one example: I invested in a custom dog toy company, PrideBites, and have probably spent at least 500 hours over two years learning about the dog toy space, the dog retail space, and the complexities of Chinese manufacturing and logistics (so I can better advise them). Not to mention, another 500+ hours I’ve spent with the team helping them through all the hundreds of issues that come up. [...]
That’s almost a full time job–and it’s only ONE company. "
(yeah, that's the same Tucker Max who used to write stories about getting loaded on Everclear, fucking midget strippers, and having diarrhea in hotel lobbies. He's had some life changes.)
Re: the "impact certificate-based retroactive public goods funding market" -- you may want to check out social impact bonds if you haven't already: https://golab.bsg.ox.ac.uk/the-basics/impact-bonds/. It sounds like a somewhat similar concept, albeit without the prediction market component. It does, however, have the advantage of having been implemented in practice.
You mention Tyler Cowen a couple of times, but one nice lesson from Stubborn Attachments is that when uncertainty abounds, look for the option with the largest upside, the one that rises above the froth, where upside is defined from an optimistic perspective. Interestingly, this ties into a core idea in machine learning, 'optimism in the face of uncertainty'. The one-sentence claim there is that if you are optimistic and try something, either 1) everything works and great things happen, or 2) everything goes wrong, but you learn a lot, which you can use to update your world model for future grants. You can also update everyone else's models, when they see which funded projects succeeded.
Re: grant-writing, for my fellow scientists out there, the best advice I ever got was to focus on "knowledge gap". You have to articulate what is the thing we don't know that you are going to figure out. Then you explain how you will figure it out. This isn't like some essay where the overall concept builds over the course of X pages; you have to articulate the knowledge gap explicitly enough that it's crystal clear to someone reading ten grants in one day. You can even put the pivotal sentence(s) in italics for extra emphasis.
I tell this to you so that if your ideas are better than mine but your writing is worse, you can beat me fair and square.
I think that making ballpark guesses might have been preferable. Once you rely on others, you might end up with a selection by committee that looks pretty similar to what existing orgs are already doing, except that they have more experience doing that. But your own perspective can't be replicated by others.
The bigger problem is simply that there were too many proposals to sift through. Maybe a shorter word limit could have helped, as you can always request more info later.
I wonder if it would be reasonable to require some small payment (e.g. $20) to apply. The proceeds could go to charity or to increase the grants pool, and it could reduce the nonserious applicants by a lot. Though I also expect there'd be some good applications that would be put off by this, more negative feelings by people who aren't picked, and maybe even some legal issues...
There may be a local Community Foundation or Charitable Foundation in your area. This is an entity that allows people to set up their own charitable funds under the legal umbrella of the main entity. These can be funded in various ways and operated in various ways, including donor directed. That might be an answer for your money transfer issues.
"This grants program could be the most important thing I’ll ever do."
Almost certainly not. You have a proven ability to write outstanding, insightful blog posts that get thousands of exceptionally smart people thinking, some of whom will be influenced to do important good things (or stop doing important bad things).
What are the chances that you're *also* astoundingly good, or even pretty good, at administering grants programs? Small.
Stick to your knitting. Find someone else who's good at the grants stuff and get them to do it.
"Another person’s application sounded like a Dilbert gag about meaningless corporate babble. “We will leverage synergies to revolutionize the paradigm of communication for justice” - paragraphs and paragraphs of this without the slightest explanation of what they would actually do. Everyone involved had PhDs, and they’d gotten millions of dollars from a government agency"
As they say, the answer is in the question. They're accustomed to writing proposals for government grants, which means they have to shovel in all the buzzwords to show they're hitting the goals for which the grant is established.
My job before my last job was like this; a new thing that came in was a monthly report from every centre about hitting a bunch of goals set under targets listed in separate sections in a newly-created overarching statement of achievement.
What it meant in reality was "what did we do this month?" "well, a bunch of the same stuff we do every month" "okay, pick out three things, then trawl through all the bumpf and pick out headings to say we did this, that and the other", which I then wrote up with copious chunks of the jargon scattered on top.
You couldn't say "this month we recruited six new members for our programme", you had to show how this came under Goal No. 15 of Section 4 of Unit 12. It was feffin' stupid, everyone involved knew it was feffin' stupid, the ideal was that every centre was sitting down for a meeting at the start of the month to plan out our New Goals For This Month but in actuality it was "oh crap, the monthly board meeting is in three days time, quick, write up a report about our New Goals to be presented there" and I was one of the people just pulling stuff out of the air for such reports. But it fulfilled the real aim of the exercise, which was "here is a Thing we are supposed to be doing, and now we are reporting that we are indeed Doing The Thing".
Bureaucracy, what a wonderful thing!
I am once again reminded about how amazingly backward the USA is in something as basic to a market economy as 'making payments to people'. And I guess the entire world, to be fair. It's just surprising that a country with an image of itself as such a capitalist Mecca would be so bad about it. Or maybe the fact that the payments systems are so heavily gatekept is the lesson in itself?
> I take it all back. The crypto future can’t come soon enough. Sending money is terrible.
We could also just change the regulations or use various interventions to actually get it up to standard. FinTech in the US isn't as bad as some places but there are places that totally eat our lunch. The Chinese have managed to do better without using crypto. Or the Koreans if you want a democratic version.
> Big effective-altruist foundations complain that they’re entrepreneurship-constrained.
They're wrong and/or lying. The incentive of funders is to encourage people to apply. It makes them look more selective and puts them in the position of rejecting rather than soliciting. Sometimes they find a gem and they're all very adept at tearing through large numbers of applications. A bad application can be quickly dismissed by some admin or another. Further, people looking for funding will generally do a circuit where they apply to dozens or hundreds of places.
The other two (an advantage in getting funding and an advantage in evaluating applications) are the important ones. As you've discovered, there are people who optimize for being fundable. It's practically a career, and there are entire industries of consultants. The real advantage would be having some ability to identify people who can accomplish the goals of big charities but don't look traditionally fundable. The people who get rejected by everyone else. The issue with this is that it's hard. The thing about Harvard is that 90% of Harvard MBAs are going to be a solid B. A few might be A+s. But B- is as low as most will go. The general population ranges from A+ to F, and you need to figure out how to place each applicant on that scale.
Anyway, I've never run a grant program, but I've definitely worked as a judge. Will it make you feel better if your experience sounds typical? I'd say applications usually break down something like:
50% so awful they have no chance
30% that are okay but definitely below the cut
5% that are good or great but just not a good fit for this particular grant/program
10% that are good but not great. A few of these get through if they fit a particular need.
5% that are really great. These are your pool where every cut hurts.
And then the occasional, rare slam dunk that's definitely getting in. But these are rare enough you often get 0 in a round.
If you have to adjust the numbers: subtract from the good ones and move it into the worse tiers.
(How does this square with not being entrepreneur-constrained? Most grant programs have tiny, tiny acceptance rates. Let's say your pool is only 7% good-to-great applications. If you have 1,000 applications, that's 70 good ones. If it's a $50,000 grant program, that's $3.5 million, more money than you probably have. If you have $1.5 million, then you're accepting about half of them. So you're funding-constrained, not entrepreneur-constrained. Still, you're encouraged not to think of it that way for the reasons above. Besides, you really want to have all top-1% applications so you can convince yourself the 5% tier was marginal.)
PS: I'm curious to hear about this AI charity stuff. I've got connections to the non-profit world and such, and I've never heard of these. But I might be on the wrong coast?
"There wasn't as ready-made an EA infrastructure for biology, so I jury-rigged a Biology Grants Committee out of an ex-girlfriend who works in biotech, two of her friends, a guy who has good bio takes in the ACX comments section, and a cool girl I met at a party once who talked my ear off about bio research. Despite my desperation, I lucked out here. One of my ex’s friends turned out to be a semiprofessional bio grantmaker. The guy from the comments section was a bio grad student at Harvard. And the girl from the party got back to me with a bunch of detailed comments like “here’s the obscure immune cell which will cause a side effect these people aren’t expecting” or “a little-known laboratory in Kazakhstan has already investigated this problem”."
There's a lot that's very interesting here, but one confusion I see throughout is
- are you funding a CAUSE or are you funding a PERSON?
Because (the way I see it, anyway) at this small money level, you are very much funding a person, not a cause. (You can only claim to be funding a cause independent of people when there's a whole infrastructure of multiple people involved...)
And that simplifies the problem tremendously. It doesn't matter how great the cause appears to be if the person charged with implementing it is incompetent, deluded, fraudulent, naive, or all the other various relevant pathologies. So you can immediately weed out everyone who gives you a bad vibe in their application, even if you can't quite put a finger on it.
Depending on exactly how many grants are left after you weed out
- person I simply do not believe can do the job with the money AND
- cause I do not care about enough to fund
my next filter would be, is there anyone, anyone at all, in the list who shows any proof of work ability of anything, anything at all?
This sounds harsh, so the question is what is the goal here? If the goal is "give out money and help a few people", well, credit card debt in the US and medicine for Africa. If the goal is "actually *achieve* something with high leverage", how about starting with someone who has achieved something in the past?
Now we get to the contentious area of "how many people are actually capable of doing anything whatsoever" where, uh, let's say, opinions differ widely. But I would say, based on my limited experience of either seeking a job, helping others find jobs, or helping others hire people, that the easiest thing in the world (for the actually competent) is to show proof of their competence:
- you want a computer job? Give a link to a great program you have written.
- you want a graphics design job? Show an imaginary campaign that you created.
- you want an electronics job? Show an interesting project you created.
These are not very high bars. And yet, 95% of people cannot clear them. This doesn't make them non-citizens, or inadequate drones. But it does mean that they are interchangeable, non-special people. They simply do not have the spark of drive and originality that makes them capable of just leading projects (even if "leading" means "do your own work without daily oversight and supervision") let alone creating something truly new and taking it to completion.
So, unfair though it may seem, that's what I would demand from an applicant -- *proof* that you can *achieve* the task (or at least make a good effort).
This is, IMHO, what Peter Thiel is doing with his infamous question (“What important truth do very few people agree with you on?”). The point is not the answer, it's to show that there is some degree of originality in this person's thought. There are multiple ways of getting to the same point, but every one of them boils down to
- I know you claim to be able to achieve this, and
- you may even have a credential (or reference or whatever) to that end, but
- talk is cheap, and many credentials are a lot easier to obtain than they should be, so
- SHOW ME something relevant to your supposed ability to achieve the goal.
And if the person cannot do that, believe me, you will hear an endless stream of justifications ('been so busy with school', 'never have time to think', 'my deprived childhood', blah blah). All of these may or may not be true, but they very much do indicate that
- the person (unlike the obsessives who achieve things) does not prioritize this task above everything else, and does not think about it night and day, every night and day; and
- that they don't even see anything wrong with this (which means they will neither be able to achieve the task themselves, nor have any competence in hiring/working with those who might take up the slack).
There are now several people in or known to the rat-sphere who have done a microgrants program. Can they create a document with lessons learned so that the next person who tries it can have something to go off of?
Regarding the future version of this, I hate to be the crypto guy but I think it would be a great fit, though I would make the tokens fungible.
The way I would do this is: you make a platform for submitting proposals. Then, for the ones you’ve approved and committed to (or for anyone else who wants to, for that matter), we run a token sale, where people buy the proposal’s token for dollar-backed stablecoins, with a threshold so that if the project doesn’t get enough funds to execute, everyone gets their money back.
To keep the math simple let’s say we’ll issue the same number of ProposalTokens as the number of dollars committed. Then at the end of a successful project, everyone with ProposalTokens can exchange them for the corresponding amount of dollars.
This way, you don’t need a single investor to cover the whole amount; it can be crowdfunded. Furthermore, there will be a market for each project’s tokens, and the price of the ProposalToken would function exactly like a prediction market for the success of the project. (And implicitly the trustworthiness of the commitment, unless you want to lock up the rewards ahead of time.)
You could even make it so that the team that’s working on the proposal cannot sell all of their ProposalTokens at once, but instead have to do it in batches over time. This way, they are incentivised to keep everyone updated about their progress, so that they can raise more funds at better valuations.
The only thing is, though, there would have to be some upside for the investors who lock up money for a year, some gap between what they put in and what’s required to execute the project. I guess that’s the price of doing this retroactively. Or maybe just the fact that you’re able to help a charity for $0 is enough?
I’m sure this is along the lines of what you’re thinking already, just wanted to share my 2 cents
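A toy simulation of the mechanism sketched in that comment may make it concrete. Everything here (the class name, the threshold, the buyers) is mine, not part of any real platform: it shows the all-or-nothing funding threshold and the one-token-per-dollar redemption on success.

```python
class ProposalSale:
    """Toy model of the proposed token sale: buyers fund a project with
    stablecoins, receiving one ProposalToken per dollar. If the funding
    threshold isn't met, everyone is refunded; if the project succeeds,
    each token redeems for one dollar; if it fails, tokens are worthless."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.tokens = {}  # buyer -> ProposalTokens held (1 token per $1)

    def buy(self, buyer, dollars):
        self.tokens[buyer] = self.tokens.get(buyer, 0) + dollars

    def total_raised(self):
        return sum(self.tokens.values())

    def settle(self, project_succeeded):
        """Return {buyer: dollar payout} at the end of the sale."""
        if self.total_raised() < self.threshold:
            return dict(self.tokens)        # threshold missed: full refund
        if project_succeeded:
            return dict(self.tokens)        # redeem 1 token = $1
        return {b: 0 for b in self.tokens}  # funded but failed: tokens worthless

sale = ProposalSale(threshold=10_000)
sale.buy("alice", 6_000)
sale.buy("bob", 5_000)
print(sale.settle(project_succeeded=True))  # {'alice': 6000, 'bob': 5000}
```

Note that in this bare-bones version investors only ever break even, which is exactly the missing-upside problem the comment flags at the end: a real design would need the redemption pot to exceed the sale price (or rely on a secondary market) to compensate for locking up capital.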
Austin from Manifold Markets here; I've been thinking about this problem from the opposite perspective, in terms of the opportunity cost that grantseekers pay to navigate the EA funding landscape (my back-of-envelope estimate for Manifold was $3k in time spent). My own proposal was to consolidate the different kinds of funding applications into a single "Common App": https://blog.austn.io/posts/proposal-a-common-app-for-ea-funding
Ideally, this platform would also allow grantmakers to better coordinate on which projects they want to fund, and allow new microgrant creators to easily get started (I saw that Scott Aaronson and Nuno Sempere both started microgrant programs patterned off of ACX).
An impact certificate model also sounds like a great idea! If Manifold's infrastructure or technical expertise would be useful, let me know (email@example.com) and I'd be happy to help.
I heard this at a much higher level a few years ago at a conference. The speaker was a CEO who had sold his company for some crazy amount of money, turned to philanthropy, and was shocked at how difficult it was. Turns out that if you are giving away money and want to do it well (especially at scale), it calls for an organization and operational expertise. He has since started spending his time working with similar folks who wanted to set up something like this but didn't know how to get started. I had never thought about it before, but when you think about it, it makes sense. The burden of donating $100M while doing it "right" is incredibly high.
Well, I think you did a great thing. And there is enough tail uncertainty that it’s possible you funded something with a low chance of very high impact, VC-style. There is a lot of chance involved. And while, e.g., a malaria charity has a pretty certain outcome, these grants plausibly have a higher, or at least very different, type of expected outcome. I think at the margin more grants probably enrich the giving ecosystem, which is a positive.
> Because you have a comparative advantage in evaluating grants
Cackles maniacally: https://forum.effectivealtruism.org/s/AbrRsXM2PrCrPShuZ
Thanks for writing this. It gives me an appreciation for why so many philanthropists choose to spend their donations in ways that are perhaps less effective but a lot more predictable. If you buy a new building for the local university then it's almost certainly not the most effective way of spending that money, but at least you know what you're getting.
I guess I have the same problem with Effective Altruism that I do with utilitarianism; it's easy to say in theory "just do whatever creates the largest number of utils, duh" but this is a heuristic you can't possibly apply in practice, so you're back to square one.
[self-deleted due to being in poor taste]
Let the big boys fund that stuff, no individual has enough bandwidth. If you want to beat the market, such as malaria etc, then you have to get directly involved. Pick someone that you connect with and aligns with your values, dig in deep with them, and make sure they don't get hung up on something you can prevent. Sometimes that will be money for the right thing, but the biggest value will come from holding back when you see that something would be counter productive or wasteful.
What you're doing seems like a good idea at first but can't really be better than randomly handing money out to winos on the corner.
I know it was probably heartbreaking in the moment, but I burst out laughing at "This remains the most stone-cold rejection I have ever gotten." Of all the consequences of a grants program I wouldn't have expected that one.
I’m really excited about the prediction markets-themed grant-making proposal! Wouldn’t it be more fun to open the initial round of investment (the owning of the impact certificates) to other ACX plebs like me who don’t have $250,000?
So you're saying Molly Mielke's Moth Minds may make microgrants more manageable?
Really liked your idea for funding grant proposals. Been reading up on prediction markets since discovering them here. Seems like there’s a need to do a dance for regulators to say “See?!?! We’re not gambling! We are just having different opinions about future events, recording those opinions, with different rewards for success!” Wondered if there was a way to do multi-year trading on those and maybe something to turn on a trickle of funding and build up as trust increases (based on -and this will do a lot of work- “some kind” of review).
I’ve often wondered if something similar to this could be done on a smaller funding scale, ie the city of Los Angeles does this for odd jobs and over time we fund and trust people to deliver sandwiches to the homeless or fill in pot-holes or even have the job of finding new odd jobs. That seems like a cheaper way to administer a city and a happier way for people to find something to do when they don’t want to be chained to a company.
- This was awesome for you to do and publicize, sorry you didn't enjoy it
- It's your money, you can do what you want with it
- You really do have the advantages you describe in section VI
- Doing your own grants as opposed to just donating everything to established EA orgs provides valuable hedging / information to the ecosystem
... but can we talk about Grant B?
ACX Grants gave an established academic $60k to jet around the world writing a book on a super trendy, politicized, non-quantitative subject.
One, that's not a long-shot project; it's a project that's not even trying. Even if there was One Weird Trick to Smash Patriarchy (which there isn't; that's "murderism" talking) this isn't the kind of work that would find it.
Two, this is a clear case where ACX Grants have zero comparative advantage. "Harvard degree" has nothing on how legible this recipient is to mainstream grant-making institutions, _and_ Tyler Cowen already wrote her a check.
Look, admittedly I also dislike that the recipient considers my presence as male in tech to be ipso facto problematic (https://www.draliceevans.com/post/smash-the-fraternity) but that isn't where my objections are coming from. I'm just disappointed that the ACX Grants program gave such a large chunk of funding to such a lackluster cause. And given the thought process described in this article I'm confused about how it happened, unless the decision was just outsourced to Tyler Cowen.
You recently wrote a post about why your posts aren't as good anymore. One of your reasons is that you're focusing too much on the community stuff.
This is a community stuff post.
I think if I assign some very arbitrary rating to how interesting one of your posts is, and some very arbitrary rating to how important it is, and multiply, this will come out as the very best post you've ever written. You've lately done big work that isn't this cool, and you've written cool stuff that isn't this big, but in terms of how good a thing you've got going here? This is something great.
About second-order consequences: saying that you gave $X against malaria is nice but is, at least for me, easily forgotten, and probably doesn't lead to blog posts like this. This grants program, however, is itself a form of publicity. It shows that you and the people around you are ready to invest a lot of time in this kind of thing, which, in my opinion, makes you appear way more serious about everything "effective altruism". Of course, evaluating that will be very hard (or will it? Maybe a poll could be a start), but I think there's something there.
While reading over this, I had the thought, maybe one that you already had in the process of running the grants program, that if I were running one, I think I would aim to prioritize funding charities and programs which will continue to exist and have some plausible means of making realistic assessments of how much good they're accomplishing over time. If you fund 50 charities, and only three of them turn out to be much good, that might still end up leading to better outcomes than just donating all that money to the Against Malaria Foundation, *if* the process allows you to discover that those charities are particularly worthwhile, so that you and other people can direct more funding to them later.
I think this encapsulates the same idea you expressed in your essay on Diversity Libertarianism. Trying more things, rather than a few known-to-be-good things, can be preferable if you have a process to iterate on the things you try which turn out to be particularly good. But, it's a lot less likely to be if you don't.
I continue to be amused that the difficulty of operations keeps surprising you. You were surprised that the Meetups Everywhere project turned into such a recordkeeping headache. You were surprised at the logistical complications of running the Adversarial Essay and Book Review contests. And now I hear you were surprised by the challenges of paying thousands of dollars to dozens of parties. I’m glad in all these cases that you find someone after-the-fact to help rescue you from the administrative quicksand, but I’d think by now you’d be better able to foresee the need before you start these projects!
I'm a CPA, and you should probably talk to a tax lawyer if you haven't already. The US has a Federal estate, gift, and trust tax that may kick in if you give more than a couple million over your lifetime. It may be yet another continent of angry cannibals.
It seems like my "let Scott Alexander handle the busywork" approach to micro-grants is not sustainable, then?
As an academic scientist with a lot of (mostly negative, with the remainder mostly puzzling) experiences applying for federal grants, I find a lot of interesting material here. Federal agencies tend not to describe their grant-reviewing experiences with such honesty.
I have a few remarks on lesson 4:
> and then my grants program would get really famous for funding such a great thing
I know some programs that veered very hard in this direction in an ostensible attempt to become more established and build their reputations. I caution against this, because people can tell when you're just going for name association rather than actually doing anything worthwhile.
> Or suppose some promising young college kid asks you for a grant to pursue their cool project.
I think Tyler would also recommend considering the marginal impact of your grant dollars. Giving somebody their first chance has a much larger potential upside than funding an existing effort.
Finally, one point that isn't emphasized here: especially when it comes to basic research, being afraid of "failure" (in the sense of a project not being successful) is counterproductive. If anything, basic research grants should be targeting a certain failure percentage, or else they won't fund enough novel ideas with potentially huge long-term payoffs. (This Works in Progress essay discusses a related idea, "Demanding null results": https://www.worksinprogress.co/issue/escaping-sciences-paradox/.) Of course, for some of the AI or x-risk proposals here, "failure" could have significant negative externalities, which is different from not finding a new drug and has to be handled more carefully.
Fantastic post, thanks for writing it all out !
>Church has seven zillion grad students, and is extremely nice, and is bad at saying no to people, and so half the biology startups in the world are advised by him
In fact, the conflict of interest section on papers he's on would be too long, so he made a webpage to list them and just links to that.
(See also: https://twitter.com/ggronvall/status/991300734774923264)
Here's a third option between starting a microgrants program and donating to an EA powerhouse 501(c)(3). Go to a public school with an impoverished population. Ask them what they need. Find a local 501(c)(3) that can fulfill that need. Put them together and ask for a plan. If you have confidence in the plan, the people, and the partnership, fund a pilot. If the pilot works, go from there. You will have created something that BUT FOR you would not have happened. Of course, I'm making it sound easier than it really is. And you need to be choosy and lucky. But inspired principals and inspired executive directors of smaller, local charities are out there, waiting to be connected and a need funded.
One thing that stands out to me is how much you benefited from connections and knowing people.
Makes me wonder if one of the most effective things one can do is simply to promote schmoozing among EA types. I know that since Oxford stopped their EA lectures I don't know where to go in academia to do that, and I wonder if it's a broader problem.
If I had the time/management skills I'd submit a grant request somewhere just to do a continuing EA lectures series in some fancy philosophy/CS/math department.
Having recently started making donations – not microgrants, just donating to established organizations, but attempting to identify underfunded groups where incremental dollars will make the most difference – this is so spot on.
The question I've especially been wrestling with is how to understand whether a particular organization actually needs your dollars, and how many of them, considering that they will also be raising from many other donors. It seems like a fundamentally intractable problem when a large number of donors / funding sources are attempting to make independent decisions (which is mostly the only option available under the current system). I wrote up some thoughts about the problem here, would love feedback: https://climateer.substack.com/p/philanthropy.
> The problem is: this grants program could be the most important thing I’ll ever do.
No, it's not. The chances that your $60K are going to be the difference between utopia and a thousand years of darkness are negligible. In terms of strict value for the money, you'd be better off finding six random hobos and giving them $10K each. However, this is the classic tragedy of the commons: giving $60K to hobos is the rational choice, but if everyone took the rational choice, we'd still be stuck trading slaves for coconuts, instead of flying drones on Mars. So, yes, what you do is important... just not so important that you should dedicate your entire life or reputation or fortune to it; nor is it important enough to endlessly obsess over. Half-assing the job is probably the right move.
Scott's mentioned retroactive grants mechanisms are very similar to what xprize are doing https://en.m.wikipedia.org/wiki/X_Prize_Foundation
I mentioned it in a comment yesterday on the polymarket post. Seems even more relevant now. I think connecting the two concepts and persons would be very beneficial
Re: Corporate Babble. Once I was reading a forum, and someone posted a thread asking for feedback on their pitch document for something related to the forum's topic. I took a brief look and told them that it looked like buzzword soup instead of actual information.
I expected them to be upset with me. I figured that someone does corporate babble because either:
(A) they believe it works better than actually explaining stuff, in which case their first thought will be something like "that's on purpose, dumbass", or
(B) they don't actually KNOW the information that they are pretending to explain, in which case they will be embarrassed to have been caught in what is effectively an act of fraud, and will likely try to bluster their way out
So I was rather gobsmacked when they replied with something like "yeah, that's a fair criticism; how can I improve on that front?"
(And I had no freaking clue what to tell them! I don't have any models for how corporate babble happens as a locally-correctable error! To this day, I'm honestly not sure whether what they really meant was "how can I *disguise* that better?" I guess I probably should have asked follow-up questions at the time...)
Having received a grant, reading this made me really anxious. Internally comparing the outcome of your project (even in the best-case scenario) with X lives saved surely creates pressure. I don't know whether it is right to feel this way, and whether all of one's expenses should actually be weighed like this (e.g., buying a new car vs. saving 4 lives?) so as to put things into perspective, or whether this in the end creates a dysfunctional amount of anxiety, actually lowering your chances of success. The same reasoning should apply to the grant selection process, I suppose.
Scott writes: “How can big foundations be short of good opportunities when the world is so full of problems? This remains kind of mysterious to me”
I believe I can solve that one for you/add some reasons besides the one you give.
My reference is to government development cooperation, not charity-based altruism, but the problems are bound to be similar. (Actually, I think Scott knows the reasons very well, but acts ignorant to come across as modest and not an arrogant know-it-all. Which is a very sympathetic character trait.)
If you want to solve the problems in the world, you run into two practical problems:
1) The people with the largest problems in the world do not have any organisations, or anything else, you can “attach your money” to. That is one of their problems. So you have to rely on middlemen.
2) Money aimed at solving other people’s problems where you do not want anything tangible in return, attracts a lot of fake middlemen that mimic the behaviour of real middlemen. You try to screen the middlemen to find the real ones, but this is a signals arms race, where fake middlemen are incentivized to improve their mimicking behaviour as you develop better screening abilities. It’s a principal-agent problem. Everything is. (As your blog post illustrates.)
To illustrate. You want to solve the problems of the people with the most problems in the world? They are actually easy to identify. 1) go to a low-income country. 2) Go to a rural & remote area in the country. 3) Locate a minority ethnic group in that area. 4) Locate the single mothers/abandoned wives/widows in that ethnic group. 5) Locate the ones with daughters. 6) Narrow in on the daughters with disabilities. Those daughters are the people with the largest problems in the world.
So how do you reach them to help them solve their problems, assuming they are still alive? (Putting a parenthesis, for the sake of argument, around the fact that they are seldom around for very long, which is one of their problems.)
You cannot reach them directly, since one of their problems is that there is no way to reach them directly. You have to rely on locals who know the community, preferably someone who is part of it. Usually the community itself is so destitute that it has no one you can reach, so you have to rely on someone not-fully-local, or more likely a whole chain of further-and-further-away middlemen between you and the daughters themselves, i.e. a chain of intermediate agents, each with its own screening-and-signalling game attached. They are usually NGOs and CBOs of some sort.
A single bad apple in one link of that chain, one that your screening abilities fail to detect, is enough for the whole thing to go wrong.
Oh, but you can get around that by relying on indicators a la the GAVI alliance, can’t you? Well, when a measure becomes a target it invites gaming as you point out, and you are back in the principal-agent signalling arms race.
Maybe the Effective Altruism people have gotten around these issues. If so, I would love to hear how! And I do not mean that cynically. I would really love links to websites, or articles, dealing practically with this problem. Since the government cooperation agency I am familiar with has not, despite probably having vastly more resources to do detection work than an average charity (apart from the biggest US ones).
For the record, I am not in the development cooperation business myself. (Apart from a brief stint in the late 1990s when I was assigned by my government to advise an Asian country grappling with how to set up efficient welfare policies in the aftermath of the Asian economic crisis.) My knowledge stems from teaching stuff like this in a master's course, plus gossip over the years from friends who make a living in the government cooperation business. Friends who introduced me to acronyms such as MONGOs (My Own Non-Governmental Organisation) and GONGOs (Government-Owned Non-Governmental Organizations).
Great write-up, thank you. There is something out there that resembles somewhat your last idea, albeit for government: https://en.wikipedia.org/wiki/Social_impact_bond
> That is, funders give them lots of money, they’ve already funded most of the charities they think are good up to the level those charities can easily absorb, and now they’re waiting for new people to start new good charities so they can fund those too
Meanwhile, funding open source projects used by almost the whole industry is apparently still unattainable rocket science.
Is there a simple explanation for how there's apparently a glut of money in charity, and Against Malaria Foundation can still save a life for $5000, a claim which I've been seeing for many years now? Surely there's no shortage of billionaires willing to do that, so why doesn't the price to save a life go up?
> five paragraphs explaining why depression was a very serious disease, then a sentence or two saying they were thinking of fighting it with some kind of web app or something.
Sounds like most grant proposals I've seen people write, and get approved.
I think second-order effects of this process are worth mentioning. A lot of VCs and rich people read this blog, so if you inspire some of them to do their own microgrants, you can surely move orders of magnitude more money.
It also serves to start a discussion in these circles, helping others who need it to become better at this kind of thing and helping you become better at this too, so that next time you can do an even better job. I think a lot of people agreed with your choice of charities, as most are awesome and inspiring. Hope we get updates later too on how it's going. People have different values and I imagine it's way more exciting to fund moonshots than the "boring" choice of giving it all to AMF.
Also, if doing it once was worth it despite all the headaches involved, doing it a second time should be way easier, especially if you get help and prepare.
I also expect a second microgrants program to get fewer applicants, considering most were already evaluated in the first round, so it shouldn't be that hard the second time.
Still, thanks for doing this. Hope some of them work out.
Section V here seems to be a much bigger condemnation of the American banking system than you might have realised.
Here in the UK, I can transfer to any other UK bank account, given the sort code (a six-digit number identifying the bank and branch) and the account number (always eight digits).
There are three different systems:
* BACS, which is almost free (a standard per-transaction fee of 35p), has no upper limit on the amount transferred, and takes three days.
* CHAPS, which is same-day guaranteed and has no upper limit on the amount transferred, but is much more expensive (£25-35 per transaction).
* FPS, which is same-day but not guaranteed (it falls back to BACS in some rare situations), has an upper limit that was £150,000 until recently but is now £1,000,000 (a pandemic change, since made permanent), and has similarly small fees to BACS (45p per transaction).
Personal bank accounts normally have free BACS/FPS transfers (i.e. the bank eats the fees, though they get bulk rates, so they are paying a lot less); business accounts normally pay modest fees. I use BACS and FPS all the time, e.g. to transfer money to friends much like Americans use Venmo. I have used CHAPS exactly once, which was to transfer the money from my mortgage to the vendor when I bought my house. It tends to only be used for very large transactions where timing is very important (like buying a house).
Doing a series of large transactions like paying out grants would require a business account with modest fees, and you'd have to contact the bank so that a large number of payments doesn't trigger their fraud alerting. But the other hassles you had to deal with would just not arise.
American banking is notorious (to Europeans) for being backward. We almost never use cheques, and Venmo is pointless here because you can just use BACS (or the equivalent domestic systems in other countries), or SWIFT for international transfers. You just need the recipient's account details / SWIFT code and you can transfer directly to their bank account using the app on your phone.
Even for bigger transactions - I work for a big enough organisation that we have recently hired someone to work full-time on sending out and receiving money in the US because the US banking system is so difficult to deal with.
Since there's so much funding available for AI-related ventures, this got me thinking that some of the ACX grant applications will probably be more in need of staffing than grants at this point. If you're looking for an employee for your AI research organization, I want to work for you! I'm finishing grad school this year and am currently looking for a job doing data science, ML engineering, AI safety research, or anything related. I'm also potentially interested in helping with any projects of the sort that would be pitched as an ACX grant application. I think I get notified of replies here but I can also be reached by email at firstname.lastname@example.org. (Sorry in advance if this kind of comment is only allowed on classified threads; no offense taken if it gets removed.)
Is there a plan to measure the success of each of the grants to determine which ones were successful, moderately successful, unsuccessful? I'm unlikely to run a large-scale grant program, but I am interested in the problem of how to effectively improve the lives of those in my immediate community, as well as those in the broader country/world through small grants.
Perhaps we're not aligned on this, but maybe there are others who read this blog who would like to know how to best distribute their surplus in a way that will persist after the initial infusion of cash is given. Where can my extra dollar do the most good for others? I feel like paying off a credit card debt isn't a good use for this, since it feels about as effective as chasing rats out of the house. It looks good for a minute, but they'll just come in around the back anyway. It's not solving a problem, so much as pushing the problem back a few months. I'm really looking to answer the question, what can I do that will change things - even if in a limited way?
I guess the answer to the question is "probably not a grants program", because a grant would require a deeper knowledge of the subject and circumstances than I can give.
I think your experience here outlines a major objection to the Rationality movement: the information needed to calculate the "best thing" to do isn't available, or at least isn't available at a cost justified by the difference the information would make. That is, if you had to choose between paying market rate for the free specialist consulting you got or doing without it, the price would have been larger than the difference the specialist knowledge made to the effectiveness of grant distribution. So (benefits of uninformed selection) > (benefits of informed selection − cost of specialist knowledge). This only works out in your favor when you can get specialist knowledge for free.
For the typical person dealing with typical decisions, the cost of specialist knowledge will exceed the incremental benefit of informed selection. Meaning that uninformed decision-making will be the rational choice.
Thus being irRational is rational.
An interesting project would be to create a crowdsourced approach to grant funding clearing. Imagine a site where someone seeking a grant could upload their proposal and then pre-approved volunteers with relevant experience in the relevant domain could review it for the obvious problems. Amazon's Mechanical Turk is almost what you want for this, notwithstanding the initial credentialing problem.
I'm surprised nobody in these comments has mentioned the Heilmeier Catechism, so I'm mentioning it here: https://www.darpa.mil/work-with-us/heilmeier-catechism
Many years ago, I read a piece by Bill Gates in the NYT, about what he had learned in 5 years in his non-profit work aimed at eradicating malaria etc. He basically said, he had failed.
It was humble, transparent and introspective. I tried to look but could not find the essay. I think there is much to learn there, on how to figure out where to invest your money, to help people the most.
It seems to be surprisingly hard to give responsibly.
Why not go the "crypto" (that is to say, cryptocurrency) route? Buying (e.g.) bitcoin is not generally hard (for the amounts I tried so far, which are four orders of magnitude smaller), and once you have it in a wallet you actually control (i.e. have the private key for), there are no practical limitations on how much you can transfer.
I can think of a few possible reasons (all of them with a question mark):
* Buying bitcoin for millions of dollars is hard to do in the US
* Doing large BTC transactions will get the IRS on you or get you into trouble with money laundering laws
* The recipients were reluctant to accept bitcoin for legal (see above) or organisational ("we are a respectable university department, we don't do bitcoin") reasons
* Computer security concerns
* Usability concerns (e.g. sending grants to invalid addresses due to typos)
Of the ACX ++ grants, it seems that a total of three applications include a bitcoin address, which I would consider convenient for donating smaller amounts (as opposed to contacting strangers using a publicly stated email address and just hoping their mail account opsec is good enough and that they won't guilt-trip you into donating more than you wanted to or anything).
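On the typo concern above: Bitcoin addresses carry a built-in 4-byte checksum (Base58Check), so a mistyped address is almost always rejected before any funds move. A minimal validity check, as a sketch of the standard algorithm (not any particular wallet's API):

```python
import hashlib

# Base58 alphabet used by Bitcoin (omits 0, O, I, and l to reduce typos).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_valid(addr: str) -> bool:
    """Return True iff addr passes the Base58Check checksum."""
    n = 0
    for ch in addr:
        if ch not in ALPHABET:
            return False          # illegal character
        n = n * 58 + ALPHABET.index(ch)
    # Each leading '1' encodes a leading zero byte.
    pad = len(addr) - len(addr.lstrip("1"))
    raw = b"\x00" * pad + n.to_bytes((n.bit_length() + 7) // 8, "big")
    if len(raw) < 5:              # need at least a 1-byte payload + 4-byte checksum
        return False
    payload, checksum = raw[:-4], raw[-4:]
    expected = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return checksum == expected
```

A single mistyped character changes the decoded bytes, so the double-SHA256 checksum comparison fails with overwhelming probability; real wallets perform this check (and more, e.g. for bech32 addresses) before letting you send.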
> (2) Most people are terrible, terrible, TERRIBLE grantwriters
I do grant writing for nonprofits, public agencies, and some research-based businesses: http://www.seliger.com. The fact you've noticed is why we have a business. Effective grant writing is extremely hard, and most people aren't good at it. ACX grants are relatively small, and I often work on projects in the $500k - $5 million range; we work on a flat-fee basis, typically in the $5,000 – $15,000 range.
Many people who are great at what they do are terrible at grant writing.
Are there not fees with crypto transactions? Both the fees of the actual transaction, plus the fees of converting dollars into, then out of, the currency?
I think you're misusing "comparative advantage" in a way that may be important.
Comparative advantage is relevant when comparing two resources that can't be directly exchanged for each other but can be used to produce the same outputs. In that case, even if one resource is less productive at producing either output, there are gains from trade as long as the relative productivities of the resources differ. In the classic example: I can use an hour of my time to make one fusilli sculpture or two cucumber pizzas. You can use an hour of your time to make two fusilli sculptures or three cucumber pizzas. Your time is more productive than mine at everything (absolute advantage), but I only give up half a fusilli sculpture by making a pizza, whereas you give up 2/3 of a fusilli sculpture, so I have a comparative advantage in pizza making.
In this case, the crucial resource is money, and it's directly exchangeable. If you think that GiveWell is better at grantmaking than you, you can give them your money. If Scott Aaronson is almost as good as GiveWell at identifying STEM education grants, and much worse at identifying global health grants, he has a comparative advantage in STEM relative to GiveWell. But he'd still be better off giving his money to GiveWell than making grants himself, even if all of it was for STEM education.
Maybe the equivalent is to imagine that you could buy a $200 blender that could puree a soup in 10 minutes or crush a pound of ice in 15 minutes, or a $200 blender that could puree a soup in 30 minutes or crush a pound of ice in 20 minutes. The second blender has a comparative advantage in ice crushing, but since it's absolutely worse at both tasks, you should never buy it.
The lesson here is: only make your own grants if there's nobody you can give the money to who could make better use of it than you can, in an absolute sense.
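The opportunity-cost arithmetic in the fusilli/pizza example can be checked mechanically; here's a minimal sketch using the production rates given above:

```python
# Hourly production rates from the example:
# "me": 1 sculpture/hr or 2 pizzas/hr; "you": 2 sculptures/hr or 3 pizzas/hr.
RATES = {
    "me": {"sculpture": 1, "pizza": 2},
    "you": {"sculpture": 2, "pizza": 3},
}

def pizza_cost_in_sculptures(person: str) -> float:
    """Sculptures forgone for each pizza this person makes (opportunity cost)."""
    r = RATES[person]
    return r["sculpture"] / r["pizza"]

for person in RATES:
    print(person, pizza_cost_in_sculptures(person))
# "me" gives up 0.5 sculptures per pizza; "you" gives up ~0.67.
# So "me" has the comparative advantage in pizza even though "you"
# has the absolute advantage in both goods.
```

Note this machinery only applies to non-exchangeable resources like time; with money, as the comment argues, only absolute advantage matters.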
> so I jury-rigged a Biology Grants Committee out of ...
FWIW, I think you mean "jerry-rigged". See: https://www.dictionary.com/e/jury-rigged-vs-jerry-rigged/
>And that's an easy one. What about B? If the professor figures out important things about what influences gender norms, maybe we can subtly put our finger on the scale.
She won't. She's a woke loony and her demented ideology will prevent her from ever producing any meaningful insight into the world.
Maybe there are heritable population differences that explain most of the variation. Maybe it's a result of industrialization, and heritable factors affect whether/when a population can undergo industrialization. Maybe these things are true, maybe they aren't, but Alice Evans will never in a million years discover that they're true because her ideology prevents her from even thinking about the world in this way to look there. And if somehow this leaps out at her, she's sure as hell never going to put that in a book.
Am I the only one who sees a paradox in:
"If i miss allocate 5000$ I am basically killing someone in africa" and "Big foundations have more money than opportunities"?
I found this whole post really affecting, and it has rejuvenated my respect and interest in ACX. But it also left me with two conflicting realizations.
It helped me understand why I've always been wary of utilitarianism as a moral code and effective altruism as a practice. Firstly, because it's kind of counter to the human moral instinct - which is just to do what you think right at the time. And secondly because it's so bloody difficult - almost impossible in fact - to sort out all the confounding variables and unknown cascading second order effects.
But it also convinced me that if I was going to give a lot of money to a grant-giver I'd want it to be Scott. That level of genuine concern combined with serious questioning is pretty rare. And I think good can come out of this if it can come out of anything.
My Heuristic That Almost Always Works is - everything is far more complicated than you think it is
Thanks for sharing your grantmaking experience. I wish other grants programs did more of this.
Please do a follow-up for the ACX Grants++
How many got funding? How many got some other help?
A vague thought... I wonder how many of these grant proposals basically reduce to "Give one or more people a basic income guarantee for a while, so they can buy food while they focus on whatever they naturally want to work on." And would therefore be moot in a world in which UBI exists.
If that's a real class of grant proposals, could one build a useful heuristic around that class? Like, if solving a problem requires money for reasons other than feeding the people who are already enthusiastic about working on it, does that somehow make a targeted grant more beneficial than otherwise?
I work in grants (both proposals and post-award administration) and have some suggestions on how to cut down on the garbage applications. The application form is no longer viewable so I couldn't check - you may have been doing some or all of these things already.
* Require an itemized budget as part of the application.
* Ask applicants to answer several specific questions. Instead of just "why should we give you $," break it into sub-questions, and place a word limit on each. Common things to ask about are why the topic area is important, what expertise the applicant brings, what makes the project uniquely likely to make a difference, and how the money will be spent (this is often a separate document called a budget justification). This will cut down on needing to read through pages on why depression is bad followed by "we're gonna make an app."
* If there are certain things you will definitely not fund, state those up front. Talk to a lawyer about this beforehand as there are some weird rules around international giving in particular.
* Have applicants indicate whether they are applying as an individual or an organization. If as an organization, require proof that the organization actually exists.
* If you plan on imposing any conditions on the funding, state that up front. Common ones include not paying until you receive an itemized invoice of expenses incurred, requiring progress reports and/or a final report at the end of the grant period, and limiting the recipient's ability to re-allocate the funds. That said, more conditions = more work for you, so for this type of program it's totally reasonable to just throw cash at people with no strings attached.
Why aren't grant proposals public? If all grants were put on a public website, like the EA forum, and funders could look through them, and anyone could contribute their expertise, what would be the downside?
Regarding EA having too much money and not enough ideas to fund -
There are a lot of smart people who I'm sure would love to dedicate their minds / careers / time to solving tough EA-adjacent problems. However, if you're in a top-tier job / career, you don't have the time or mental energy to get close to solving, or even thinking about solving, some big problems. Given the too-much-money situation, it seems like "earning to give" does not have the same value it maybe used to? It would be cool to see some sort of EA charity startup incubator that would pay good salaries and could lure talented people away from top-tier jobs, giving them time to tinker / build / think up solutions to problems that meet EA criteria. It would be classed as a high-risk fund from an EA perspective, but how beneficial would it be to discover / build another charity with impact on par with the Against Malaria Foundation?
Hey! I was thinking about “Because you have a comparative advantage in soliciting proposals”
I’ve also long thought about diversity and representation problems in EA/entrepreneurship/funding/many of the adjacent spaces.
So I had this idea: what if you tried to source proposals from interesting people throughout the world, the developing world in particular. The goal would be to find suitable candidates that had never even considered the possibility of a grant. The invitation could be as simple as guaranteeing that their proposal would get read.
There are a couple of ways you could do this; here are two that come to mind:
1. Ask people who have done a lot of traveling, "have you met anyone while traveling who is XYZ?", where XYZ are factors that might indicate your values. I personally might say something like "unassuming/humble, community oriented, generous, smart, potential for impact". In my travels I've met several such people, like https://notmadyet.com/ and http://gogreatergood.com (both of whom are EA aligned!), and more. There are plenty of more well-known travel bloggers too.
2. Ask people who have spent real time abroad doing humanitarian work the same question. I first thought of people who have taught English abroad, done Habitat for Humanity, etc. If you surveyed former Peace Corps volunteers with "name up to 3 people you think could do a lot with this grant", any names that came up more than once would likely be pretty interesting candidates, in my opinion.
In this model, there's a middle source layer. You're relying on their judgment being reasonably sound, on their having spent enough time somewhere to have more than a passing understanding of the landscape, and on their being roughly values-aligned.
There are near-infinite possible problems with this. Yet given the typical value of diversity, of deeply connecting with the "users"/"customers"/beneficiaries of one's work for feedback, and of context awareness (I think you could reasonably call these candidates "subject matter experts" in the issues facing their local communities, at least), I think this would be a stream worth at least experimenting with.
I’m happy to do a bunch of this work, or at least get it started, if you (or another grant maker) is interested.
I think there is an explore/exploit trade-off going on with charitable giving, and it is reasonable to put a percentage of one's money into 'explore' activities and a percentage into 'exploit' ones: nets to prevent malaria fall squarely into the 'exploit' category, for example. The strength of a grants programme like this is to support 'explore' activities, some of which you expect to fail, and which wouldn't otherwise find funding easily because there aren't currently any good sources of funding for that type of activity.
I've been a referee for small grant proposals for a funding council and it is hard; everything you say resonates!
Matthew Brooks sent me here because I have been participating for the last couple of years in an experiment that is very similar to what you have described. I was first a “savvy investor” in a project in San Quentin Prison (https://npxadvisors.com/impact-security/the-last-mile/). Then I was one “Funder” (NPX calls me a Donor) out of about 10 funders that put up ~$10 million in the Colorado and Northern California Donor Impact Funds. We donors are committed to be a buyer of “impact” economic mobility outcomes for the next 3-5 years from 7 nonprofits (out of an application pool of about 80) for which investors have pre-funded with ~$8 million. Now the driver of all of this activity is NPX Advisors, who is now raising about 10 more such Donor Impact Funds (half domestic, half international).