“I don’t consider us to still be dating” - Brutal!
What an interesting job. Never having considered it before, I would look at their previous work and see how much of the grants were wasted on excessive management fees and bureaucracy instead of actually going toward what they were intended for.
I think you should ask people the following question, before funding them. "Why shouldn't I directly give my cash to the victims you want to help? I mean, what is the benefit in giving to them through you?"
Good point. Because it is to prevent something bad that hasn't happened yet. No victims yet!
Another place direct cash charity fails is in sending cash to victims of a currently active natural disaster. You cannot reach them. Have to use a "middleman" such as the red cross.
If you stumbled backwards into the concept of a VC field, I think it's plausible that you might do well stumbling further into "incubator" territory. A lot of your issues seem to stem from idiosyncratic people who haven't written a grant proposal in their lifetimes, especially not a successful one. The shoe-string version could be a virtual session, a used copy of a proposal-writing textbook mailed to the individual, and three months before opening the door. Or you could consider something more personal and intense, Y Combinator style, or anywhere in between.
Self-reply with anecdata: the number of people who would make intelligent use of a lump sum of cash pales in comparison to the number of people who would benefit from not worrying about their normal expenses for 3-6 months and some serious guidance for their unrecognized talents.
Why not simply judge whether or not you’d be very happy to fund each grant, and then just randomly choose grants to fund from among those? Much simpler to administer, avoids tying yourself in 4D knots, and the losers of this process can then be publicly posted with no negative signaling risk.
I think this absolutely makes a lot of sense - there's some point in the evaluation process where the difficulty of making finer-grained calls at the boundary gets much higher than the expected value of getting those decisions right, so you might as well switch to random at that point. And since that point is where you start getting a lot of the perverse incentives to pitch to the specific evaluator, by randomizing there you can reduce the incentive to do that.
Two aphorisms from VC types I know (who have actually made serious money)
- you bet on people, not companies/technologies
- usually it's more important to make a decision (any decision) fast enough than to make the optimal decision.
Regarding the latter, one of the things that constantly amazes me when I see these people in action is precisely this "make a decision fast, and don't look back" aspect to how they live their lives. This is not a claim that the decisions are thoughtless, or coin flips; rather
- the appropriate amount of pre-thinking has gone into the issue so that when the specific circumstances require a decision it's an informed decision
- excess time is not spent on trivial unimportant decisions (to paraphrase: "just buy the first one that looks good enough on Amazon; it'll probably do the job, and if not you're out a few dollars, which is worth less than spending five hours on internet shopping research")
But I will be the first to admit that this is not an easy skill to acquire, especially the learning not to regret whatever decisions turn out incorrectly (as some inevitably do).
That's surely true at some point but I'm not convinced that it is realized here. It stands out to me that the extra info Scott kept getting was the kind of info that massively affected his subjective estimate of proposal success. Isn't it usually only when the extra info makes small differences that it makes sense to stop looking for more?
Indeed, isn't there a common fallacy where people stop trying to save money on big purchases once the saving drops below some percentage of the price, rather than looking at the absolute size of the effect?
Administering this amount of money seems like something where spending an extra couple of weeks of full time labor picking better outcomes might be worth it.
But if it becomes known you're doing this, then you'll get a lot more submissions of "just good enough" grants by people who want to exploit the process. Heck, I'd spend twenty minutes writing a decent-at-first-glance grant proposal if there was a chance of a free $50K.
I really don't get this -- if people could write lots of "just good enough" grants (which, remember, the line is "Scott feels very very happy funding it at first blush", not "Scott vaguely feels good about it"), then why wouldn't they also submit those to the actual grant program? If you can write a decent-at-first-glance proposal maybe you SHOULD get the $50k!
The supposed internal reason for feeling good about grants is believing that they are actually used to achieve the proposed purpose. Melvin here talks about having zero intention of actually following up on their grant proposal, and having zero faith in it actually working out. The cause is unachievable by default.
People can already write extremely bad-faith grants to the existing process, though, and they didn't appear to? Why would slightly tweaking the approval process change who is applying? Do you really think you'd get an influx of fraudulent grants under this scenario?
When people write grants they by default assume that their work will be professionally reviewed (and that's what Scott really tried to do here). They assume that half-assed grants without any real perspective have approximately zero chance of winning. Preparing a convincing grant is a lot of work for something that has zero chance of winning.
However, if people realized that their grants are not checked too thoroughly, it would push a lot of them to try to pull a scam, as they now know they have a real chance of getting money. Of course, this only happens if information about Scott's decision process somehow becomes known to the general public. And there's a big difference between "being a bad reviewer" and "people knowing that you are a bad reviewer". However, there's still a risk. Like, Scott does another public post about grants, and then some commenter points out that it's clearly obvious that "grant X" cannot possibly work and everyone who researched the topic even for a minute would know that. And it would be a big hit.
Google "john wittle education grants", try to find my tumblr post series, or my lesswrong posts, or the time Scott talked about my comments back on SSC.
Yeah, but you have to account for how you spent that $50k. If it's "I bought four houses" or "I paid my ex-boyfriend" (see Freddie's post here https://freddiedeboer.substack.com/p/white-journalists-are-terrified-of) then eventually you have to pay it back or return the money you obtained deceitfully.
Of course, if you're a professional scammer, you probably spend *another* twenty minutes weasel-wording the proposal such that "yes I bought four houses, see the small print, and these houses were for the benefit of the charity (i.e. me)" gets you technically off the hook.
You've already made the point very eloquently in this post that you don't have enough wisdom/discernment/predictive ability to allocate the grant money to a level of precision of 5%. By doing the grant at all you've already accepted that you will misallocate some money, but that's fine because you'll get some big wins as well and (like an angel investor) you'll "beat the market" by saving more lives than donating everything to a charity. So I think it's more like:
1) Go the way you did, beat yourself up judging the grants to the point that you're miserable, and misallocate the grant funds by 20% anyway due to bias in your reviewers. Also, systematically avoid funding some really good grants because of biases in your experts.
2) Go the random route, misallocate the grant funds by 20% due to your inability to even do the "smell test" well. Avoid lots of bias for free, maybe pick up a grant that "everyone knows" can't work, and have a really easy time administering the grant. You feel really good and ready to do it again next year.
Imagine if you had done it the random way -- you would have personally had a better experience and could more heartily recommend the grant process to other people, instead of saying that it's likely to make you miserable!
Why do you assume the random route has the same level of misallocation as the route Scott did take? Funding based on feelings seems like it'd very easily get a higher error rate than funding based on research Scott isn't super good at.
I mostly just don't trust experts to a huge degree. I think it's sort of like how monkeys throwing darts can do better than a lot of hedge funds. The key insight is that if you have experts involved they often miss things that everyone "knows" are impossible. I've seen it in SO many scenarios that it forms a big part of my intuition, but it's hard to communicate and sounds silly. I also think that randomness has powerful salubrious effects in avoiding weird bias.
I also think Scott has better intuition than he might imagine -- I trust Scott's gut feeling followed by random allocation more than I trust any panel of experts.
That's a fair critique of investment management experts (and mostly true!) but saying you don't trust experts in general is pretty significant! I mean, I trust a surgeon (an expert in surgery) to do a better job operating on me than a random non-expert would, so I think there's a significant degree to which trust in experts is an important thing to have. Not total trust, mind you, a better surgery would probably be done by having an expert AND a non-expert observer (maybe), but I think having an expert at all is really important.
I mean, as Scott posted in his examples, how should he differentiate between 2 biomedicine grants that he knows very little about without consulting experts? Going based on emotion here, if you trust Scott to be accurate in his reported abilities, would just be a coin toss.
To respond to your second point: I think Scott should read the proposals and then use his intuition to either qualify or disqualify each one. Put the qualifying proposal(s) on the "to fund" pile. Then when he's done, randomly allocate funds. If both make it to the pile, then they have an equal chance of being funded. I think in Scott's case his intuition will be reasonably accurate, and in any case it means him being directly responsible for the process rather than deferring it to others, and it doesn't stress him out too much, so that he and others do it again. Sometimes you can't differentiate things well, and that's OK. I don't find the agreement between the experts to be that useful; it's exactly what you'd expect if the field itself has biases, and at least for grants it pays to sometimes buck the trend. I trust Scott's intuition to be simply better for these purposes than experts.
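(For concreteness, here's a minimal sketch of that qualify-then-randomize idea; the field names, threshold, and budget are all hypothetical, just to show the two-stage shape of the process.)

```python
import random

def allocate_grants(proposals, budget, is_fundable):
    """Two-stage allocation: gut-check filter, then a lottery among qualifiers.

    proposals   -- list of dicts like {"name": ..., "amount": ...}
    budget      -- total dollars available
    is_fundable -- predicate encoding the reviewer's "would I be happy to fund this?" check
    """
    # Stage 1: intuition / smell-test filter -- no ranking, just in or out.
    qualified = [p for p in proposals if is_fundable(p)]

    # Stage 2: draw randomly from the qualified pile until the budget runs out.
    random.shuffle(qualified)
    funded, remaining = [], budget
    for p in qualified:
        if p["amount"] <= remaining:
            funded.append(p)
            remaining -= p["amount"]
    return funded

# Hypothetical usage (the gut_score field and cutoff are made up):
# winners = allocate_grants(proposals, budget=1_500_000,
#                           is_fundable=lambda p: p["gut_score"] >= 8)
```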
To respond to your first point: I obviously have something like "trust" -- I ride planes, I drive a car, I use the internet, and I would address a severe medical problem by (very reluctantly) using the medical system. All of these things are built by people and they do work (and in cases like planes they work very well!) But through experience I've learned to sharply limit my trust to avoid it "leaking" in ways that aren't warranted.
For example, I believe that artifacts produced by people, like cars, can work well. I don't believe that a group of "auto experts" would be likely to evaluate grants relating to improving automobiles better than Scott. I don't expect them to be able to build cars themselves if they were stranded on a desert island. I expect that for most of their beliefs about cars, they haven't put in the effort to truly evaluate their beliefs against reality. Instead their beliefs are a copy of whatever their field says, and likely to be all biased in the same way. In reality it's even worse than that -- I expect that in a group of car experts about half of them wouldn't know very much about cars at all!
I believe that something like this is true for practically all groups, and it's my default assumption. I believe this because it's proven true in my life over and over again in many fields in which I AM an expert personally, and I've actually updated on that information instead of Gell-Manning my way through the rest of my life.
If you can really understand that investment management experts aren't good at their jobs on average, why shouldn't that knowledge propagate to other groups as well?
(note: a few very rare people can genuinely be good at things and deserve a lot of trust; these are the people who design the stuff everyone uses. But these are individual people, not groups. I think Scott is a worthwhile individual person to judge stuff and that most modifications to this simple process will just make things worse.)
The reason why that is true in finance is that prices have been optimized to be fair, so it’s hard to make good or bad choices. That is…not true of the grant pool, probably even after an initial pass.
Side note: one thing that I've recently learned is that a lot of hedge funds exist not to generate abnormally-high returns as much as returns which are uncorrelated with the market.
(You probably meant managed mutual funds which are trying to beat the market; that I missed the goal of hedge funds for so long makes me want to share it whenever I can.)
Well, the reason you want uncorrelated returns is so that you can average many of them together and slap on leverage, to get higher absolute returns.
In any case, the goal of hedge funds is to make money for the people running the hedge fund. And that mostly means having a lot of money under management for a long time.
I'm not sure it makes sense to say I'm not good enough to allocate to a level of precision of 5%. If I am terrible but try hard, I might go from a D- to a D, which is 5% better.
Fair enough! (assuming you mean D- to D?) I personally see trying really hard to evaluate grants more as trading away the bias you know for the bias you don't know, but I can see how a lot of work, at the margins, might allow you to predict that a grant won't work because it's got some fatal flaw. (but then again, the grant that really gives you 10,000x return will probably be an opportunity that hasn't been funded yet because it SEEMS like there's something wrong with it but in reality there isn't.)
But in any case, is that extra work really worth it compared to a more streamlined approach? I think that at the level of $1.5MM it's genuinely worth it to burn 5% ($75,000) just to make it easier for you personally, since it means the difference between you feeling like it wasn't too hard vs feeling like it was an ordeal. That's important for establishing things like this as a community tradition. I really wish you could have written a post saying that the grant is actually fairly easy and fun to administer; that's worth a lot in and of itself!
Another way to think about it is that lots of charities have MUCH higher overheads than 5%. If that's the price for making it sustainable for you then it seems worth it to me.
I don't know if this is reasonable, but it could be possible to see this year as an investment that's mostly about learning how to evaluate grants, with being effective manifesting in future years.
Short version: it takes way more time than you'd think. To be a successful angel investor, you pretty much have to know your founders' businesses as well as they do themselves.
"Even though angel investing looks like this casual, easy and fun activity, make no mistake about it, if you want to avoid losing your shirt, you spend a LOT of time on it: finding deals, vetting companies you’re interested in, and then once you invest, working with them like hell to make them succeed.
Just one example: I invested in a custom dog toy company, PrideBites, and have probably spent at least 500 hours over two years learning about the dog toy space, the dog retail space, and the complexities of Chinese manufacturing and logistics (so I can better advise them). Not to mention, another 500+ hours I’ve spent with the team helping them through all the hundreds of issues that come up. [...]
That’s almost a full time job–and it’s only ONE company. "
(yeah, that's the same Tucker Max who used to write stories about getting loaded on Everclear, fucking midget strippers, and having diarrhea in hotel lobbies. He's had some life changes.)
Nitpick: 1000+ hours over 2 years is roughly a quarter of a full-time job (at least in my part of the world, not sure about the US). But Max's point is well-taken.
Re: the "impact certificate-based retroactive public goods funding market" -- you may want to check out social impact bonds if you haven't already: https://golab.bsg.ox.ac.uk/the-basics/impact-bonds/. It sounds like a somewhat similar concept, albeit without the prediction market component. It does, however, have the advantage of having been implemented in practice.
You mention Tyler Cowen a couple of times, but one nice lesson from Stubborn Attachments is that when uncertainty abounds, look for the option with the largest upside, which rises above the froth, where upside is defined from an optimistic perspective. Interestingly, this ties in to a core idea in machine learning, 'optimism in the face of uncertainty'. The one-sentence claim there is that, if you are optimistic and try something, either 1) everything works and great things happen or 2) everything goes wrong, but you learn a lot which you can use to update your world model for future grants. You can also update everyone else's models, when they see which funded projects succeeded.
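(For readers who haven't met that phrase: it usually refers to bandit rules like UCB, which score each option by its average result plus a bonus for how little it has been tried. A minimal sketch, with made-up grant areas as the arms and made-up numbers; this is purely illustrative, not Scott's actual process.)

```python
import math

def ucb_pick(stats, total_trials, c=2.0):
    """Pick the option with the highest optimistic estimate.

    stats is {option: (num_tries, total_reward)}. Options we've barely tried
    get a large exploration bonus, so uncertainty is treated optimistically.
    """
    def score(option):
        n, reward = stats[option]
        if n == 0:
            return float("inf")  # never tried: maximally optimistic
        mean = reward / n
        bonus = math.sqrt(c * math.log(total_trials) / n)
        return mean + bonus
    return max(stats, key=score)

# Hypothetical usage: past grant "rewards" by area (counts and totals invented).
stats = {"biosecurity": (3, 2.0), "forecasting": (10, 6.0), "meta-science": (1, 0.2)}
print(ucb_pick(stats, total_trials=14))  # the least-explored area wins the bonus
```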
That's a great quote. If I ever snap and become a genocidal dictator, I'm going to put "optimism in the face of uncertainty" over the gates of my extermination camps.
Re: grant-writing, for my fellow scientists out there, the best advice I ever got was to focus on "knowledge gap". You have to articulate what is the thing we don't know that you are going to figure out. Then you explain how you will figure it out. This isn't like some essay where the overall concept builds over the course of X pages; you have to articulate the knowledge gap explicitly enough that it's crystal clear to someone reading ten grants in one day. You can even put the pivotal sentence(s) in italics for extra emphasis.
I tell this to you so that if your ideas are better than mine but your writing is worse, you can beat me fair and square.
I think that making ballpark guesses might have been preferable. Once you rely on others, you might end up with a selection by committee that looks pretty similar to what existing orgs are already doing, except that they have more experience doing that. But your own perspective can't be replicated by others.
The bigger problem is simply that there were too many proposals to sift through. Maybe a shorter word limit could have helped, as you can always request more info later.
I wonder if it would be reasonable to require some small payment (e.g. $20) to apply. The proceeds could go to charity or to increase the grants pool, and it could reduce the nonserious applicants by a lot. Though I also expect there'd be some good applications that would be put off by this, more negative feelings by people who aren't picked, and maybe even some legal issues...
There may be a local Community Foundation or Charitable Foundation in your area. This is an entity that allows people to set up their own charitable funds under the legal umbrella of the main entity. These can be funded in various ways and operated in various ways, including donor directed. That might be an answer for your money transfer issues.
"This grants program could be the most important thing I’ll ever do."
Almost certainly not. You have a proven ability to write outstanding, insightful blog posts that get thousands of exceptionally smart people thinking, some of whom will be influenced to do important good things (or stop doing important bad things).
What are the chances that you're *also* astoundingly good, or even pretty good, at administering grants programs? Small.
Stick to your knitting. Find someone else who's good at the grants stuff and get them to do it.
I think my writing is influential because it causes things like people giving me $1.5 million for grants. I think allocating $1.5 million to charity is inherently worth a *lot* of writing, especially if it's true that it can save ~300 lives.
I think you're underselling your writing here, which has made a big impact on a lot of people doing important things; it's hard to say how many lives it's saved, but it's definitely added a whole lot more than $1.5 million in value to the world, however you want to measure that. Having said that I think the grant program was also a great idea simply because these skills seem pretty correlated (or rather, both highly correlated with understanding and evaluating complex systems).
Scott, $1.5M is pocket change compared to the *existing* influence of your writing, let alone the marginal influence if you spent the effort on more blog posts instead of grants. This is not hyperbole.
One example: Figure the net present value of increasing the chance of abolishing the FDA by 1%.
Stop staring at the coins in the sofa cushion - there's 10 billion dollars in your living room.
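(A toy version of the arithmetic being gestured at; every number here is an assumption for illustration, not a claim.)

```python
# Back-of-envelope expected value, all inputs assumed purely for illustration.
value_if_abolished = 1_000_000_000_000   # assume abolition is worth $1T in NPV (pure assumption)
delta_probability  = 0.01                # assume the writing shifts the odds by 1 percentage point

expected_value = value_if_abolished * delta_probability
print(f"${expected_value:,.0f}")         # $10,000,000,000 -- the "10 billion dollars in your living room"
```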
“The ideas of economists and political philosophers, both when they are right and when they are wrong are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist.”
― John Maynard Keynes
He was right about that (at least), and it applies more generally to ideas and viewpoints that shape the world.
I share the intuition that Scott's writing is way more valuable than his grant making. But I view this sort of activity as collecting experiences that will enrich and inform his future writing (it's already happening!).
Scott writes his insights that are a result of his experience. If he stopped having new experience... well, his backlog of insights is probably long enough to sustain his writing for a few more years, but maybe the quality would drop, because not all insights are equally important or equally inspiring, and the best ones were probably already used. The experience with grants was new, and as a side effect we got a high quality article on a new topic.
Also, there is a potential for future articles, if Scott decides to investigate the outcomes of the grants.
"Another person’s application sounded like a Dilbert gag about meaningless corporate babble. “We will leverage synergies to revolutionize the paradigm of communication for justice” - paragraphs and paragraphs of this without the slightest explanation of what they would actually do. Everyone involved had PhDs, and they’d gotten millions of dollars from a government agency"
As they say, the answer is in the question. They're accustomed to writing proposals for government grants, which means they have to shovel in all the buzzwords to show they're hitting the goals for which the grant is established.
My job before my last job was like this; a new thing that came in was a monthly report from every centre about hitting a bunch of goals set under targets listed in separate sections in a newly-created overarching statement of achievement.
What it meant in reality was "what did we do this month?" "well, a bunch of the same stuff we do every month" "okay, pick out three things, then trawl through all the bumpf and pick out headings to say we did this, that and the other", which I then wrote up with copious chunks of the jargon scattered on top.
You couldn't say "this month we recruited six new members for our programme", you had to show how this came under Goal No. 15 of Section 4 of Unit 12. It was feffin' stupid, everyone involved knew it was feffin' stupid, the ideal was that every centre was sitting down for a meeting at the start of the month to plan out our New Goals For This Month but in actuality it was "oh crap, the monthly board meeting is in three days time, quick, write up a report about our New Goals to be presented there" and I was one of the people just pulling stuff out of - the air for such reports. But it filled the real aim of the exercise, which was "here is a Thing we are supposed to be doing, and now we are reporting that we are indeed Doing The Thing".
Yeah, any public/civil service job will be the same. I think it's because a lot of things come from top-down initiatives ("the government has decided that they will encourage healthy eating, so this is The Year of the Cucumber!") and because a lot of funding comes from The Government, you have to comply with the strictures.
So even if your work has damn-all to do with cucumbers, you still have to Show How This Project Encourages Cucumbers (Growing, Eating, Making Available To Public, Pickling Or Raw).
Part of the problem is that you get directives and policies set by people who come out of environments (like academia or business, governments love trying to run public service along business principles even if it's like cramming a square peg into a round hole) where it's *natural* for them to use jargon like "leveraging synergies" so they write up proposals full of this type of guff, which get converted into policies, which get sent along to your particular organisation as "this is the model we want you to use".
End result - you can't write basic descriptions of what you actually did, like "recruited new clients for programme", it has to be the synergistic paradigms stuff.
I am once again reminded about how amazingly backward the USA is in something as basic to a market economy as 'making payments to people'. And I guess the entire world, to be fair. It's just surprising that a country with an image of itself as such a capitalist Mecca would be so bad about it. Or maybe the fact that the payments systems are so heavily gatekept is the lesson in itself?
When you look at a field that involves literally millions of people and literally billions of dollars and see issues, it may be that you are some visionary who can see further than others.
But more likely is that you simply don't understand many of the issues involved, from security, (un)revocability, and fraud tracking to tax to the law (Patriot Act puts severe constraints on randomly moving around more than $10,000 to unclearly verified third parties).
My understanding is that the main issue with money transfers in the US is that there's just so damn many banks. In smaller countries it's easier to get all of the banks to agree to some standard, but the US is overrun by tiny local banks where the whole computer system is a bunch of Wangs run by some guy called Steve.
From 1927 to 1994, interstate branch banking in the U.S. was forbidden by the government. This had all the usual fun unintended consequences, like more bank runs (because of less geographical diversification, and thus more exposure to things like droughts in farm country, natural disasters, etc.; compare to Canada's better experience) and lots more smaller regional banks than a market system would've ended up with.
On top of that, the primary clearing house in the U.S. is run by the federal reserve, yet another government agency. Again, not usually going to stay ahead of the technology curve (see also SWIFTNET).
Finally, as noted above, the financial industry is one of the two most heavily regulated industries in the U.S., the other being healthcare/health insurance. As a result, there are many, many regulatory barriers to moving money around, and very few encouragements. Most of those involve required paperwork for you or your bank (or both!) to do, lest you, for example, be considered a money launderer.
Interesting. So the big banks like Bank of America, Wells Fargo and Chase were local banks as recently as 1994? Or they had some weird structure that allowed them to operate nationwide?
Some of them operated as bank holding companies (and still do). So they'd have a local Wells Fargo CA and Wells Fargo AZ, and a Wells Fargo Commercial Banking, and a Wells Fargo Investment Banking (to fall outside the rules) which would be organizationally separate, but all owned by the same set of investors.
This basically started in the 80s when states began to legally allow holding companies to own banks in multiple states.
The USA is hilariously behind the times. Even your credit cards are an outdated form; from what I've heard you still use checks, and bank transfers between different banks take longer than a single working day.
For reference, in Poland a free transfer arrives within 12 hours on a working day between any two bank accounts, and if you pay extra, within 15 minutes.
Disclaimer: I have never moved $1,500,000 or similarly large sums, but as far as I know large companies use the same system.
On the other hand, my first response to Scott's dilemma was "you know checks are still a thing, you could just write one and put it in the mail?" At least for the US grant recipients. Checks are not the *most* timely and convenient way to transfer money, but they do at least bound the inconvenience and at a level below what it seems like Scott was up against.
I used to do freelance work for US companies and most of them paid by check. During 7 years I think I had 2 or 3 cases when the check was lost in the mail and had to be reissued. I wasn't bothered that receiving money via checks was a very slow process. My biggest problem was that European banks charged high fees for depositing them.
I think that the US kept using checks longer than other countries because of convenience. Indeed, writing a check takes less time than sending a wire and is probably less prone to errors. Although the cost of fraud could be higher.
Except that the USA is the odd one out. For the longest time, I used to say that America for some reason was developed in other ways, but had the consumer banking sector of a developing country. It made sense because the rest of the developed world was just so much ahead of the USA.
This no longer works, because today most of the developing countries have better banking than the USA does.
In the European Union it is trivial (3 minutes of work; delivery within 12 hours on a working day for free, or within 15 minutes if you pay for a quick transfer).
Fraud happens (usually in the form of social engineering of various kinds) but seems less impactful and easier to avoid than check-based forms of fraud in the USA.
I strongly suspect that if Scott had got this “Medallion Signature Guarantee” (which probably is a simple electronic signature generator device), his experience making wires wouldn't be much different from European experience.
The difference is probably that most Europeans have never used checks, and for them it is natural to get a cryptographic device or nowadays simply an app on your mobile that you use to sign your bank transfers, whereas in the US most people have too much inertia to spend the time to figure it out, even though it would make their lives easier.
Also you can imagine that Scott's time is now very valuable. Apparently his income stream is at least $5000 per day. If he needs to make about 100 payments and each payment takes about 10 minutes (let's be real, he might need to look up account details for each payee, sometimes clarify them, double-check entries for mistakes, electronically sign them, etc.), it would take 1000 minutes or 2 full working days ($10,000 worth of his time). It is much more reasonable to outsource this work to someone who is more experienced and can do it cheaper.
A downside of the European banking system is that the verification method is often either an app (commonly refusing to work on degoogled or rooted Android phones, and with other similar issues) or SMS (SIM-card hijacking is easier than most people expect). Proper separate devices are rare.
> I take it all back. The crypto future can’t come soon enough. Sending money is terrible.
We could also just change the regulations or use various interventions to actually get it up to standard. FinTech in the US isn't as bad as some places but there are places that totally eat our lunch. The Chinese have managed to do better without using crypto. Or the Koreans if you want a democratic version.
> Big effective-altruist foundations complain that they’re entrepreneurship-constrained.
They're wrong and/or lying. The incentive of funders is to encourage people to apply. It makes them look more selective and puts them in the position of rejecting rather than soliciting. Sometimes they find a gem and they're all very adept at tearing through large numbers of applications. A bad application can be quickly dismissed by some admin or another. Further, people looking for funding will generally do a circuit where they apply to dozens or hundreds of places.
The other two (an advantage in getting funding and an advantage in evaluating applications) are the important ones. As you've discovered, there are people who optimize around being fundable to grants. It's practically a career and there are entire industries of consultants. The real advantage would be having some ability to identify people who can accomplish the goals of big charities but don't look traditionally fundable. The people who get rejected by everyone else. The issue with this is that it's hard. The thing about Harvard is that 90% of Harvard MBAs are going to be a solid B. A few might be A+s. But B- is as low as most will go. The general population can range from A+ to F, and you need to figure out how to determine that at scale.
Anyway, I've never run a grant program but I've definitely worked as a judge. Will it make you feel better if your experience sounds typical? I'd say applications usually break down something like:
50% so awful they have no chance
30% that are okay but definitely below the cut
5% that are good or great but just not a good fit for this particular grant/program
10% that are good but not great. A few of these get through if they fit a particular need.
5% that are really great. These are your pool where every cut hurts.
And then there's the occasional, rare slam dunk that's definitely getting in. But these are rare enough you often get 0 in a round.
If you have to adjust the numbers: subtract from the good ones and move it into the worse tiers.
(How does this square with not being entrepreneur-constrained? Most grant programs have tiny, tiny acceptance rates. Let's say you have a pool that's only got 7% good-to-great applications. If you have 1,000 applications that's 70 good ones. If it's a $50,000 grant program that's $3.5 million. More money than you probably have. If you have $1.5 million then you're accepting about half. But you're funding constrained, not entrepreneur constrained. Still, you're encouraged not to think of it that way for the reasons above. Besides, you really want to have all top 1% applications so you can convince yourself the 5% was a marginal application.)
PS: I'm curious to hear about this AI charity stuff. I've got connections to the non-profit world and such, and I've never heard of these. But I might be on the wrong coast?
>We could also just change the regulations or use various interventions to actually get it up to standard. FinTech in the US isn't as bad as some places but there are places that totally eat our lunch. The Chinese have managed to do better without using crypto. Or the Koreans if you want a democratic version.
In the US can't you just mail a cheque? You gotta pay for a stamp and it takes a couple of days, but that's better than paying 2% to Paypal.
That was my thought, too. There are some limitations I know of: depositing checks can be inconvenient, especially checks large enough to trigger procedures for detecting fraud and tax evasion, and they can take several days before the funds are available (due to the US still using a check-clearing automation system that was first set up in 1972). But unless I'm missing something big, it seems like a better option than PayPal or wire transfers.
Sending by mail is much slower and less secure. People can and do steal such mail. Besides, in some places you can have a completely secure transaction that takes a matter of hours for big transactions. Often less for smaller ones. That's just clearly superior to mail.
Mail theft is quite rare, at least if the organizations on each end are tight enough not to have internal mailroom-theft issues. And if the check is stolen, so what? Cancel it and send a new one. Yes, that means the check takes three weeks to receive and clear instead of two - but what's your time value of money if you're even *thinking* of eating a 2% transaction fee rather than taking 2-3 weeks to clear the transaction? Checks are slow, *slightly* inconvenient, cheap, and reliable.
It's worth pointing out that, like job applications, grant applications get a lot of bad applicants because the people that get rejected then go on to apply everywhere else, while the people that get accepted get on with doing the job and aren't just sending out constant applications.
This isn't true. Most people stop applying for jobs once they have a job. Few people stop applying for grants once they have a grant because they can always use more money.
"There wasn't as ready-made an EA infrastructure for biology, so I jury-rigged a Biology Grants Committee out of an ex-girlfriend who works in biotech, two of her friends, a guy who has good bio takes in the ACX comments section, and a cool girl I met at a party once who talked my ear off about bio research. Despite my desperation, I lucked out here. One of my ex’s friends turned out to be a semiprofessional bio grantmaker. The guy from the comments section was a bio grad student at Harvard. And the girl from the party got back to me with a bunch of detailed comments like “here’s the obscure immune cell which will cause a side effect these people aren’t expecting” or “a little-known laboratory in Kazakhstan has already investigated this problem”.
Was thinking about this: does anyone know if there's some sort of expertise swap out there? I often need to know weird things about other disciplines; I have access to leading experts on bees, woodpeckers, JavaScript, and Chromebook touchpad drivers. This is very useful on the occasions I need to know about one of those things. I don't know any chemists, which is very irritating when I have big chemistry questions. I assume someone out there sometimes has questions about CS pedagogy or AI that are similarly going unanswered.
You could try using the relevant StackExchange or Quora for a cheap on demand version. For learning existing content you might try Twitter or YouTube. Working at large consultancy or similar professional employer with lots of different disciplines represented will also let you play telephone very effectively.
There's a lot that's very interesting here, but one confusion I see throughout is
- are you funding a CAUSE or are you funding a PERSON?
Because (the way I see it, anyway) at this small money level, you are very much funding a person, not a cause. (You can only claim to be funding a cause independent of people when there's a whole infrastructure of multiple people involved...)
And that simplifies the problem tremendously. It doesn't matter how great the cause appears to be if the person charged with implementing it is incompetent, deluded, fraudulent, naive, or all the other various relevant pathologies. So you can immediately weed out everyone who gives you a bad vibe in their application, even if you can't quite put a finger on it.
Depending on exactly how many grants are left after you weed out
- person I simply do not believe can do the job with the money AND
- cause I do not care about enough to fund
my next filter would be, is there anyone, anyone at all, in the list who shows any proof of work ability of anything, anything at all?
This sounds harsh, so the question is what is the goal here? If the goal is "give out money and help a few people", well, credit card debt in the US and medicine for Africa. If the goal is "actually *achieve* something with high leverage", how about starting with someone who has achieved something in the past?
Now we get to the contentious area of "how many people are actually capable of doing anything whatsoever" where, uh, let's say, opinions differ widely. But I would say, based on my limited experience of either seeking a job, helping others find jobs, or helping others hire people, that the easiest thing in the world (for the actually competent) is to show proof of their competence:
- you want a computer job? Give a link to a great program you have written.
- you want a graphics design job? Show an imaginary campaign that you created.
- you want an electronics job? Show an interesting project you created.
These are not very high bars. And yet, 95% of people cannot clear them. This doesn't make them non-citizens, or inadequate drones. But it does mean that they are interchangeable, non-special people. They simply do not have the spark of drive and originality that makes them capable of just leading projects (even if "leading" means "do your own work without daily oversight and supervision") let alone creating something truly new and taking it to completion.
So, unfair though it may seem, that's what I would demand from an applicant -- *proof* that you can *achieve* the task (or at least make a good effort).
This is, IMHO, what Peter Thiel is doing with his infamous question (“What important truth do very few people agree with you on?”). The point is not the answer, it's to show that there is some degree of originality in this person's thought. There are multiple ways of getting to the same point, but every one of them boils down to
- I know you claim to be able to achieve this, and
- you may even have a credential (or reference or whatever) to that end, but
- talk is cheap, and many credentials are a lot easier to obtain than they should be, so
- SHOW ME something relevant to your supposed ability to achieve the goal.
And if the person cannot do that (believe me, you will hear an endless stream of justifications: 'been so busy with school', 'never have time to think', 'my deprived childhood', blah blah, all of which may or may not be true), that very much does indicate that
- the person (unlike the obsessives who achieve things) does not prioritize this task above everything else, and does not think about it night and day, every night and day; and
- that they don't even see anything wrong with this (which means they will neither be able to achieve the task themselves, nor have any competence in hiring/working with those who might take up the slack).
There are now several people in or known to the rat-sphere who have done a microgrants program. Can they create a document with lessons learned so that the next person who tries it can have something to go off of?
Regarding the future version of this, I hate to be the crypto guy but I think it would be a great fit, though I would make the tokens fungible.
The way I would do this is - you make a platform for submitting proposals. Then, for the ones you've approved and committed to (or anyone else who wants to, for that matter), we run a token sale, where people buy the proposal's token for dollar-backed stablecoins, with a threshold so that if the project doesn't get enough funds to execute, everyone gets their money back.
To keep the math simple let’s say we’ll issue the same number of ProposalTokens as the number of dollars committed. Then at the end of a successful project, everyone with ProposalTokens can exchange them for the corresponding amount of dollars.
This way, you don't need a single investor to cover the whole amount; it can be crowdfunded. Furthermore, there will be a market for each project's tokens, and the price of the ProposalToken would function exactly like a prediction market for the success of the project. (And implicitly the trustworthiness of the commitment, unless you want to lock up the rewards ahead of time.)
You could even make it so that the team that's working on the proposal cannot sell all of their ProposalTokens at once, but instead has to do it in batches over time. This way, they are incentivised to keep everyone updated about their progress, so that they can raise more funds at better valuations.
The only thing is though, there would have to be some upside for the investors to lock up money for a year, some gap between what’s required to execute the project - I guess that’s the price of doing this retroactively. Or maybe just the fact that you’re able to help a charity for $0 is enough?
I’m sure this is along the lines of what you’re thinking already, just wanted to share my 2 cents
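(To make the mechanics concrete, here's a rough sketch of the threshold-crowdfund-plus-redemption idea in plain Python rather than an actual smart contract; the class name, threshold, and payout rule are placeholders, not a worked-out design.)

```python
class ProposalToken:
    """Toy model of the crowdfund-with-threshold plus retroactive redemption idea."""

    def __init__(self, funding_threshold):
        self.funding_threshold = funding_threshold
        self.balances = {}  # backer -> tokens (1 token per stablecoin dollar committed)

    def buy(self, backer, dollars):
        self.balances[backer] = self.balances.get(backer, 0) + dollars

    def total_raised(self):
        return sum(self.balances.values())

    def settle(self, project_succeeded, payout_per_token):
        """If the threshold wasn't met, refund everyone; otherwise redeem on success."""
        if self.total_raised() < self.funding_threshold:
            return {b: amt for b, amt in self.balances.items()}  # full refund
        if project_succeeded:
            return {b: amt * payout_per_token for b, amt in self.balances.items()}
        return {b: 0 for b in self.balances}  # backers eat the loss

# Hypothetical usage: a $50k project, two backers, a 10% retroactive upside.
t = ProposalToken(funding_threshold=50_000)
t.buy("alice", 30_000)
t.buy("bob", 25_000)
print(t.settle(project_succeeded=True, payout_per_token=1.1))
```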
Austin from Manifold Markets here; I've been thinking about this problem from the opposite perspective, in terms of the opportunity cost that grantseekers pay to navigate the EA funding landscape (my back-of-envelope estimate for Manifold was $3k in time spent). My own proposal was to consolidate the different kinds of funding applications into a single "Common App": https://blog.austn.io/posts/proposal-a-common-app-for-ea-funding
Ideally, this platform would also allow grantmakers to better coordinate on which projects they want to fund, and allow new microgrant creators to easily get started (I saw that Scott Aaronson and Nuno Sempere both started microgrant programs patterned off of ACX).
An impact certificate model also sounds like a great idea! If Manifold's infrastructure or technical expertise would be useful, let me know (akrolsmir@gmail.com) and I'd be happy to help.
I heard this at a much higher level a few years ago at a conference. The speaker was a CEO who sold his company for some crazy amount of money, turned to philanthropy, and was shocked at how difficult it was. Turns out that if you are giving away money and want to do it well (especially at scale), it calls for an organization and operational expertise. He has since started spending his time working with similar folks who wanted to set up something like this but didn't know how to get it started. I had never thought about it before, but when you think about it, it makes sense. The weight of donating $100M while doing it "right" seems incredibly high.
Yeah, it's hard to give away money if you're not standing on the street corner handing out bundles of tenners to random passers-by.
First, if you naively just give out money, every begging letter writer, con artist, and scammer is going to target you.
Second, even if you give it to a particular charity or good cause, if it's a one-man band operation it may eventually implode due to clashes of personality, the guy in charge burning out, or he decides stubbornly to do things his way which is not the best way. There are also potentials for scandals within the organisation, as here in an Irish case:
So you need some way of winnowing out the fraudsters and the inefficient, and that's a big job if you have no experience and are only in this as a source of cash.
Well, I think you did a great thing. And there is enough tail uncertainty that it's possible you funded something with a low chance of very high impact, VC-style. There is a lot of chance involved. And while e.g. a malaria charity has a pretty certain outcome, these grants might plausibly have a higher, or at least very different, expected outcome. I think at the margin more grants probably enriches the giving ecosystem, which is a positive.
Thanks for writing this. It gives me an appreciation for why so many philanthropists choose to spend their donations in ways that are perhaps less effective but a lot more predictable. If you buy a new building for the local university then it's almost certainly not the most effective way of spending that money, but at least you know what you're getting.
I guess I have the same problem with Effective Altruism that I do with utilitarianism; it's easy to say in theory "just do whatever creates the largest number of utils, duh" but this is a heuristic you can't possibly apply in practice, so you're back to square one.
> I guess I have the same problem with Effective Altruism that I do with utilitarianism; it's easy to say in theory "just do whatever creates the largest number of utils, duh" but this is a heuristic you can't possibly apply in practice
On the other hand, she (apparently) got ghosted/forgotten, then got over it and still applied for the program, just to be rejected again - for being ghosted. I don't think you need any ulterior motives to explain why she might have been a tad pissed.
Yeah, that could also be the case. It depends on details which we aren't, and shouldn't be, privy to.
In retrospect, that comment of mine was in bad taste, given that both parties involved probably read this. I think I was in too dark of a place to be commenting, as evidenced by my other comment on this post. I think I'll edit it into nothingness, but leave this up as a warning sign to myself (and others) about where my brain can take me.
I am the referenced party (substack name is a pseudonym) and all's well on my end! Scott masterfully told the story for humor (I laughed out loud), but there's a bunch of non-included detail to it that made the whole thing significantly more innocuous for both parties.
I'm afraid I have a contrarian reflex where my mind often tries to find exceptions or alternate perspectives for stuff I see. And this combines with a defensive mechanism I developed to deal with a bad relationship, where I try to figure out the worst possible light in which some action could be taken. The result, these days, is dark humor, at least when it's funny. Otherwise it's just dark. And I probably shouldn't spring it on unsuspecting people. It's too easy for stuff to be taken the wrong way.
Let the big boys fund that stuff, no individual has enough bandwidth. If you want to beat the market, such as malaria etc, then you have to get directly involved. Pick someone that you connect with and aligns with your values, dig in deep with them, and make sure they don't get hung up on something you can prevent. Sometimes that will be money for the right thing, but the biggest value will come from holding back when you see that something would be counter productive or wasteful.
What you're doing seems like a good idea at first but can't really be better than randomly handing money out to winos on the corner.
I know it was probably heartbreaking in the moment, but I burst out laughing at "This remains the most stone-cold rejection I have ever gotten." Of all the consequences of a grants program I wouldn't have expected that one.
I’m really excited about the prediction markets-themed grant-making proposal! Wouldn’t it be more fun to open the initial round of investment (the owning of the impact certificates) to other ACX plebs like me who don’t have $250,000?
Really liked your idea for funding grant proposals. Been reading up on prediction markets since discovering them here. Seems like there’s a need to do a dance for regulators to say “See?!?! We’re not gambling! We are just having different opinions about future events, recording those opinions, with different rewards for success!” Wondered if there was a way to do multi-year trading on those and maybe something to turn on a trickle of funding and build up as trust increases (based on -and this will do a lot of work- “some kind” of review).
I’ve often wondered if something similar to this could be done on a smaller funding scale, ie the city of Los Angeles does this for odd jobs and over time we fund and trust people to deliver sandwiches to the homeless or fill in pot-holes or even have the job of finding new odd jobs. That seems like a cheaper way to administer a city and a happier way for people to find something to do when they don’t want to be chained to a company.
- This was awesome for you to do and publicize, sorry you didn't enjoy it
- It's your money, you can do what you want with it
- You really do have the advantages you describe in section VI
- Doing your own grants as opposed to just donating everything to established EA orgs provides valuable hedging / information to the ecosystem
... but can we talk about Grant B?
ACX Grants gave an established academic $60k to jet around the world writing a book on a super trendy, politicized, non-quantitative subject.
One, that's not a long-shot project; it's a project that's not even trying. Even if there was One Weird Trick to Smash Patriarchy (which there isn't; that's "murderism" talking) this isn't the kind of work that would find it.
Two, this is a clear case where ACX Grants have zero comparative advantage. "Harvard degree" has nothing on how legible this recipient is to mainstream grant-making institutions, _and_ Tyler Cowen already wrote her a check.
Look, admittedly I also dislike that the recipient considers my presence as male in tech to be ipso facto problematic (https://www.draliceevans.com/post/smash-the-fraternity) but that isn't where my objections are coming from. I'm just disappointed that the ACX Grants program gave such a large chunk of funding to such a lackluster cause. And given the thought process described in this article I'm confused about how it happened, unless the decision was just outsourced to Tyler Cowen.
Yeah, that's the one that makes me raise my eyebrows. Scott says that this person is working on the problem of gender equality in developing countries, but if I go by something they've already worked on, they've cracked it.
The short answer is "industrialisation".
Slightly more developed answer: if it's a village of poor goat herders in the mountains in the back of Nowherestan, then they will have rigid gender roles and traditional ways of life that mean women stay in the house, are obedient to the men of the family, and the men do all the public life.
If Amir from the goat herd village goes to the nearest Big City to get a job and earn money (because even traditional rural communities are not immune from the stresses of modernity), it's very damn likely Amir is going to run into women who are working outside the home for the first time in his life. This is going to have a *big* effect on his world-view (see rigid gender roles and traditional ways of life).
Also, if Amir meets a girl from his home village or neighbouring village, because they've both moved to the Big City to work and make money, that he wants to marry then due to factors such as the high cost of living (relative to poor goat herder village), the girl *has* to work outside the home, else they'll both end up living in a cardboard box in an alley because they can't afford anything better on Amir's wages (he's a poor goat herder, he's not going to be working high-paying, high-value creating, jobs).
This has knock-on effects such as "we can't afford to have kids/ten kids like we would do back home", so things like birth control and abortion are now part of modern Big City living life. Education, so that everyone can get better jobs, or at least Amir's kids (if he stays in the city, or if he moves back home because his kids might need to move to the city later) can get better jobs. This will include the daughters. Everything else then flows from that, plus the Big City is much, much, *much* more permeable to "modern/Western/Enlightenment/call 'em what you like" values such as 'gender equality', all in the name of "making money". (Nobody does things like this out of the goodness of their heart; industrialisation needs warm bodies to work in the factories, and women are as good as men to feed the needs of 'MelonPhones needs chip manufactories at low costs').
You need women in the workforce, not at home being wives and mothers in the traditional way. You need women with some degree of education. The needs of capitalism mean that the old, rigid, traditional ways get chipped away (or power-washed away). Amir may never go home, or he may go home as a rich (by the standards of his village) man who takes a wife and has the traditional life - but he's been changed by his time in the city, his expectations have been changed, and when his kids are in their turn heading off to the city, the changes will go further with them - and so on. Eventually, small backwards goat herder village values lose out to the values of modernity, even in the small backwards goat herder villages.
You recently wrote a post about why your posts aren't as good anymore. One of your reasons is that you're focusing too much on the community stuff.
This is a community stuff post.
I think if I assign some very arbitrary rating to how interesting one of your posts is, and some very arbitrary rating to how important it is, and multiply, this will come out as the very best post you've ever written. You've lately done big work that isn't this cool, and you've written cool stuff that isn't this big, but in terms of how good a thing you have going here? This is something great.
About second order consequences: saying that you gave $X against malaria is nice but is, at least for me, easily forgotten, and probably doesn't lead to blog posts like this. This grant program, however, is itself a form of publicity. It shows that you and the people around you are ready to invest a lot of time in this kind of thing, which, in my opinion, makes you appear way more serious about everything "effective altruism". Of course, evaluating that will be very hard (or will it? Maybe a poll could be a start), but I think there's something there.
While reading over this, I had the thought, maybe one that you already had in the process of running the grants program, that if I were running one, I think I would aim to prioritize funding charities and programs which will continue to exist and have some plausible means of making realistic assessments of how much good they're accomplishing over time. If you fund 50 charities, and only three of them turn out to be much good, that might still end up leading to better outcomes than just donating all that money to the Against Malaria Foundation, *if* the process allows you to discover that those charities are particularly worthwhile, so that you and other people can direct more funding to them later.
I think this encapsulates the same idea you expressed in your essay on Diversity Libertarianism. Trying more things, rather than a few known-to-be-good things, can be preferable if you have a process to iterate on the things you try which turn out to be particularly good. But it's a lot less likely to be preferable if you don't.
Hmmm...I was thinking kind of the opposite. Part of my advantage over big foundations is that I'm faster and nimbler, which suggests being more willing to fund one-time opportunities.
I do intend to email everyone in a year and ask them what happened, and write about anything I learn from this process.
If you think that the one-off opportunities wouldn't be funded by other foundations, but the iteratable (iterable?) ones would, then that's one way to take advantage of your position, but I'd need a much higher level of confidence to donate to causes that can't be iterated on.
Is giving tens of thousands of dollars to an academic to take a sabbatical going to produce several lives saved from malaria worth of value, or several people relieved of debt? It might, but my feeling is, it's *probably* not going to have more impact than the highest value charities. But the big problem is that if it does, I can't generalize from that that it'd be valuable to fund sabbaticals to other academics; the situation is too idiosyncratic.
If there's a probability distribution where there's a 90% chance that money given to a cause does less good than money given to the highest-rated charities, then a significant part of the remaining 10% is probably taken up by the possibility of it doing a little bit better than the best-rated charities, but not a lot. So, it's probably not going to exceed the expected value of the best-rated charities with a one-time donation.
But, if it's an opportunity you can iterate on, a 10% chance of finding something that could be even higher-impact than the best-rated charities, which people could then direct more money to, could give that one-time donation *very* high expected value. It doesn't have to turn out to be much better than the current best charities for the numbers to turn out favorably.
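To put rough numbers on that, here's a toy sketch; the 10% success chance, the payoff values, and the follow-on multiplier below are all invented purely for illustration, not taken from the post:

```python
# Toy expected-value comparison (invented numbers) of a one-off speculative grant
# versus the same grant when success can be iterated on with follow-on funding.

BENCHMARK = 1.0            # value per dollar of the best-rated charity (normalized)
P_SUCCESS = 0.10           # assumed chance the speculative grant beats the benchmark
WIN_VALUE = 1.2            # assumed value per dollar if it does (only slightly better)
LOSE_VALUE = 0.5           # assumed value per dollar if it doesn't
FOLLOW_ON_MULTIPLIER = 20  # assumed extra funding later directed to a proven winner

# One-off opportunity: the upside is capped at the grant itself.
one_off_ev = P_SUCCESS * WIN_VALUE + (1 - P_SUCCESS) * LOSE_VALUE

# Iterable opportunity: credit the original grant with the value it unlocks later.
iterable_ev = P_SUCCESS * WIN_VALUE * FOLLOW_ON_MULTIPLIER + (1 - P_SUCCESS) * LOSE_VALUE

print(f"benchmark charity: {BENCHMARK:.2f}")    # 1.00
print(f"one-off grant:     {one_off_ev:.2f}")   # 0.57 -- below the benchmark
print(f"iterable grant:    {iterable_ev:.2f}")  # 2.85 -- well above it
```

With these made-up numbers the one-off grant loses to the benchmark, but the same 10% chance of success becomes very attractive once later funders can pile onto a demonstrated winner.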
I continue to be amused that the difficulty of operations keeps surprising you. You were surprised that the Meetups Everywhere project turned into such a recordkeeping headache. You were surprised at the logistical complications of running the Adversarial Essay and Book Review contests. And now I hear you were surprised by the challenges of paying thousands of dollars to dozens of parties. I’m glad in all these cases that you find someone after-the-fact to help rescue you from the administrative quicksand, but I’d think by now you’d be better able to foresee the need before you start these projects!
Ever since I read John Salvatier's http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail my default expectation for stuff I don't have prior experience in has been "this will be a lot more involved than I think, in all sorts of unforeseen ways". This both puts me in the right frame of mind (long slog, not 'quick and easy') and leaves me occasionally pleasantly surprised when it turns out to be wrong.
If he needed any defense: that things are going to be more complicated than one thinks in the first place (especially the first time you do it) is always true.
And I think he laid out quite a few arguments in his post why one should do it (start) anyway.
(Don't want to spam but just in case this gets missed in another comment: connecting Scott's retroactive grant with the following seems very beneficial: the xprize foundation)
I think my main mistake here was underestimating the number of applicants. I made a bet with Oliver Habryka that I'd get fewer than (I think the number was) 60. I ended up getting 656. I don't know why I was so wrong. I implicitly figured this seemed harder than book reviews and I got about 100 book review entries. But maybe the lure of "easy" money attracted people who otherwise wouldn't participate in ACX stuff. Certainly some applicants didn't seem to understand what ACX was.
I will note that although I obviously had less investment in the subject, I was also surprised when I heard you got that many applicants. I thought that seemed like a lot more people than I'd expect to think they could make a reasonable case for you to give them money for stuff.
ETA: I think I would have bet against you for <60 though. I didn't give a lot of thought in advance to how many applicants I expected you to get, but when I try to think of what number of applicants would have been least surprising to me, I think that would have been somewhere in the range of 100-150.
Maybe you got 656 applicants because, no matter how badly the application was written, actually not procrastinating but writing a proposal and hitting send was a form of therapy for the applicants? Some ideas are nice to think of, but actually having a field to express the idea, seeing what it looks like written down, and knowing there was low risk but high potential return in finding out whether the idea-seed catches, could in itself have been a benefit to the applicant...
AFAICT "constantly underestimates how difficult things will be" is a common attribute of people who actually get things done (often by delegating the hard work to someone who can't say no, admittedly). Those of us who predict in advance that things will be difficult often give up before we start.
I'm a CPA, and you should probably talk to a tax lawyer if you haven't already. The US has a Federal estate, gift, and trust tax that may kick in if you give more than a couple million over your lifetime. It may be yet another continent of angry cannibals.
As an academic scientist with a lot of (mostly negative, with the remainder mostly puzzling) experience applying for federal grants, I find a lot of interest here. Federal agencies tend not to describe their grant-reviewing experiences with such honesty.
I have a few remarks on lesson 4:
> and then my grants program would get really famous for funding such a great thing
I know some programs that veered very hard in this direction in an ostensible attempt to become more established and build their reputations. I caution against this, because people can tell when you're just going for name association rather than actually doing anything worthwhile.
> Or suppose some promising young college kid asks you for a grant to pursue their cool project.
I think Tyler would also recommend considering the marginal impact of your grant dollars. Giving somebody their first chance has a much larger potential upside than funding an existing effort.
Finally, one point that isn't emphasized here: especially when it comes to basic research, being afraid of "failure" (in the sense of a project not being successful) is counterproductive. If anything, basic research grants should be targeting a certain failure percentage, or else they won't fund enough novel ideas with potentially huge long-term payoffs. (This Works in Progress essay discusses a related idea, "Demanding null results": https://www.worksinprogress.co/issue/escaping-sciences-paradox/.) Of course, for some of the AI or x-risk proposals here, "failure" could have significant negative externalities, which is different than not finding a new drug and has to be handled more carefully.
>Church has seven zillion grad students, and is extremely nice, and is bad at saying no to people, and so half the biology startups in the world are advised by him
In fact, the conflict of interest section on papers he's on would be too long, so he made a webpage to list them and just links to that.
I have yet to encounter a professional opportunity where I haven't been working with someone from the Church lab.
Even my partner's defense earlier this week ended with a probe-independent spatial-seq project done with... someone from the Church lab (Gu lab at IPD/super cool).
Here's a third option between starting a micro grants program and donating to an EA powerhouse 501c3. Go to a public school with an impoverished population. Ask them what they need. Find a local 501c3 that can fulfill that need. Put them together and ask for a plan. If you have confidence in the plan, the people, and the partnership, fund a pilot. If the pilot works, go from there. You will have created something that BUT FOR you would not have happened. Of course, I'm making it sound easier than it really is. And you need to be choosy and lucky. But inspired principals and inspired executive directors of smaller, local charities are out there. Waiting to be connected and a need funded.
I work in grant administration and these are some of my favorite projects to work on! So heartwarming when things go well, and even when it doesn't work out long-term there's usually at least some positive impact.
One thing that stands out to me is how much you benefited from connections and knowing people.
Makes me wonder if one of the most effective things one can do is simply to promote schmoozing among EA types. I know that since Oxford stopped their EA lectures I don't know where to go in academia to do that, and I wonder if it's a broader problem.
If I had the time/management skills I'd submit a grant request somewhere just to do a continuing EA lectures series in some fancy philosophy/CS/math department.
This has long been seen as a really important factor in all sorts of non-linear gains from cooperation, eg in the Silicon Valley VC network, but it's not obvious how to encourage it artificially because you immediately run into bad actors and bad incentives.
I think you might be able to do this kind of thing "manually" as an individual but that just sort of brings you back to the original problem.
This is a bit offtopic, but I'm thinking that it might be worth it to encourage more self-favoritism among rationalists (jobs, networking, etc). Yes, there are costs to this, but it seems like something all really effective social movements do, and it might be worth it.
Having recently started making donations – not microgrants, just donating to established organizations, but attempting to identify underfunded groups where incremental dollars will make the most difference – this is so spot on.
The question I've especially been wrestling with is how to understand whether a particular organization actually needs your dollars, and how many of them, considering that they will also be raising from many other donors. It seems like a fundamentally intractable problem when a large number of donors / funding sources are attempting to make independent decisions (which is mostly the only option available under the current system). I wrote up some thoughts about the problem here, would love feedback: https://climateer.substack.com/p/philanthropy.
> The problem is: this grants program could be the most important thing I’ll ever do.
No, it's not. The chances that your $60K are going to be the difference between utopia and a thousand years of darkness are negligible. In terms of strict value for the money, you'd be better off finding six random hobos and giving them $10K each. However, this is the classic tragedy of the commons: giving $60K to hobos is the rational choice, but if everyone took the rational choice, we'd still be stuck trading slaves for coconuts, instead of flying drones on Mars. So, yes, what you do is important... just not so important that you should dedicate your entire life or reputation or fortune to it; nor is it important enough to endlessly obsess over. Half-assing the job is probably the right move.
I mentioned it in a comment yesterday on the polymarket post. Seems even more relevant now. I think connecting the two concepts and persons would be very beneficial
Re: Corporate Babble. Once I was reading a forum, and someone posted a thread asking for feedback on their pitch document for something related to the forum's topic. I took a brief look and told them that it looked like buzzword soup instead of actual information.
I expected them to be upset with me. I figured that someone does corporate babble because either:
(A) they believe it works better than actually explaining stuff, in which case their first thought will be something like "that's on purpose, dumbass", or
(B) they don't actually KNOW the information that they are pretending to explain, in which case they will be embarrassed to have been caught in what is effectively an act of fraud, and will likely try to bluster their way out
So I was rather gobsmacked when they replied with something like "yeah, that's a fair criticism; how can I improve on that front?"
(And I had no freaking clue what to tell them! I don't have any models for how corporate babble happens as a locally-correctable error! To this day, I'm honestly not sure whether what they really meant was "how can I *disguise* that better?" I guess I probably should have asked follow-up questions at the time...)
My theory is that a person in this situation has been caught in the "learning for the test" loop. I'd expect that, if they were put in a situation where they had to coordinate with others to produce some concrete result, they'd figure it out. Some situation where it doesn't matter what anybody says -- it matters if the thing actually works.
I'm a web developer, so I encounter very few buzzwords. Because if you hand a web developer a document full of words, and they don't know one of them, they'll ask, "What does this mean?" and "What are the acceptance criteria?" ie, what measurable result should I deliver to you?
If I get a word like "synergy" and nobody gives me any measurable acceptance criteria, I'll put a check box on my to do list that says "synergize", mark it done and move on to real work. Part of what more senior web developers have to do is train and assist the product/design/marketing/sales people around them to render requirements concretely.
So I think working my job for a year or so would be a pretty effective cure, if somebody didn't know how to tell if some text was explaining a thing concretely or not.
Having received a grant, reading this made me really anxious. Internally comparing the outcome of your project (even in the best-case scenario) with X lives saved surely creates pressure. Idk whether it is right to feel this way and whether all of one's expenses should actually be weighed like this (e.g., buying a new car or saving 4 lives?) so as to put things into perspective, or whether this in the end creates a dysfunctional amount of anxiety, actually lowering your chances of success. The same reasoning should apply to the grant selection process, I suppose.
Scott writes: “How can big foundations be short of good opportunities when the world is so full of problems? This remains kind of mysterious to me”
I believe I can solve that one for you/add some reasons besides the one you give.
My reference is to government development cooperation, not charity-based altruism, but the problems are bound to be similar. (Actually, I think Scott knows the reasons very well, but acts ignorant to come across as modest and not an arrogant know-it-all. Which is a very sympathetic character trait.)
If you want to solve the problems in the world, you run into two practical problems:
1) The people with the largest problems in the world do not have any organisations, or anything else, you can “attach your money” to. That is one of their problems. So you have to rely on middlemen.
2) Money aimed at solving other people’s problems where you do not want anything tangible in return, attracts a lot of fake middlemen that mimic the behaviour of real middlemen. You try to screen the middlemen to find the real ones, but this is a signals arms race, where fake middlemen are incentivized to improve their mimicking behaviour as you develop better screening abilities. It’s a principal-agent problem. Everything is. (As your blog post illustrates.)
To illustrate. You want to solve the problems of the people with the most problems in the world? They are actually easy to identify. 1) go to a low-income country. 2) Go to a rural & remote area in the country. 3) Locate a minority ethnic group in that area. 4) Locate the single mothers/abandoned wives/widows in that ethnic group. 5) Locate the ones with daughters. 6) Narrow in on the daughters with disabilities. Those daughters are the people with the largest problems in the world.
So how do you reach them to help them solve their problems, assuming they are still alive? (Putting a parenthesis, for the sake of argument, around the fact that they are seldom around for very long, which is one of their problems.)
You cannot reach them directly, since one of their problems is that they have no way for you to reach them directly. You have to rely on locals who know the community, preferably someone who is part of it. Usually, it is a problem that the community itself is so destitute it has no one you can reach, so you have to rely on someone not-fully-local, or more likely a whole chain of further-and-further-away middlemen from the daughters themselves, i.e. a chain of intermediate agents, each with their own screening-and-signalling game attached. They are usually NGOs and CBOs of some sort.
It is enough that there is a bad apple in one of the links in that chain that your screening abilities fail to detect, for the whole thing to go wrong.
Oh, but you can get around that by relying on indicators a la the GAVI alliance, can’t you? Well, when a measure becomes a target it invites gaming as you point out, and you are back in the principal-agent signalling arms race.
Maybe the Effective Altruism people have gotten around these issues. If so, I would love to hear how! And I do not mean that cynically. I would really love links to websites, or articles, dealing practically with this problem. Since the government cooperation agency I am familiar with has not, despite probably having vastly more resources to do detection work than an average charity (apart from the biggest US ones).
For the record, I am not in the development cooperation business myself. (Apart from a brief stint in the late 1990s when I was assigned by my government to advise an Asian country grappling with how to set up efficient welfare policies in the aftermath of the Asian economic crisis.) My knowledge stems from teaching stuff like this in a master course, plus gossip over the years from friends who make a living in the government cooperation business. Friends who introduced me to acronyms such as MONGOs (My Own Non-Governmental Organisation) and GONGOs (Government-Owned Non-Governmental Organizations).
I meant that you know the principal-agent problems related to identifying good projects for funding. Which your blog post implicitly illustrates in good and also entertaining ways.
...my point was that these problems equally bedevil large organisations, with more money, who try to identify good projects for funding. Ok, they are bigger and may hence have more resources for screening, but they are also more exposed to fakers, because they are juicier targets.
The problem is larger if you want to fund projects in places where you do not have local knowledge. Unfortunately, most people with large problems live in other countries than most people who have money that can fix problems. This goes for large donors as well as small donors. Both usually lack local knowledge.
...Bear in mind that you usually need local knowledge not only of a country, but of particular places and communities inside countries. The more problems they have in a place (daughters with disabilities of single mothers in rural ethnic minorities as an example), the more you would like to fund something that helps, but at the same time the less likely you are to have local knowledge yourself.
This certainly does not mean that offering help is impossible. For example, donors often rely on reputations to screen those local NGOs and CBOs they can trust. Because they can rationally have reason to believe that the organisation they are dealing with, provided it has carried the costs of being honest in previous deals with the likes of you or your organisation, will not take the risk of losing its entire reputation by cheating you. This stems from the fact that reputations are built up gradually (by abstaining from cheating), but can be lost in one stroke.
Trusting Doctors without Borders not to cheat those who donate to them is probably a sound strategy, for this and related reasons. However, there are many NGOs and CBOs (most of them I would argue) whose work is much more difficult to evaluate. Perhaps particularly local ones, but that is hard to say.
> That is, funders give them lots of money, they’ve already funded most of the charities they think are good up to the level those charities can easily absorb, and now they’re waiting for new people to start new good charities so they can fund those too
Meanwhile, funding open source projects used by almost the whole industry is apparently still unattainable rocket science.
Is there a simple explanation for how there's apparently a glut of money in charity, and Against Malaria Foundation can still save a life for $5000, a claim which I've been seeing for many years now? Surely there's no shortage of billionaires willing to do that, so why doesn't the price to save a life go up?
> five paragraphs explaining why depression was a very serious disease, then a sentence or two saying they were thinking of fighting it with some kind of web app or something.
Sounds like most grant proposals I've seen people write, and get approved.
I think second-order effects of this process are worth mentioning. A lot of VCs and rich people read this blog, so if you inspire some of them to do their own microgrants, you can surely move orders of magnitude more money.
It also serves to start a discussion in these circles, helping others who need it to become better at this kind of thing and helping you become better at this too, so that next time you can do an even better job. I think a lot of people agreed with your choice of charities, as most are awesome and inspiring. Hope we get updates later too on how it's going. People have different values and I imagine it's way more exciting to fund moonshots than the "boring" choice of giving it all to AMF.
Also, if doing it once was worth it despite all the headaches involved, doing it a second time should be way easier, especially if you get help and prepare.
I also expect a second microgrants program to get fewer applicants, considering most were already evaluated in the first round, so it shouldn't be that hard the second time.
Still, thanks for doing this. Hope some of them work out.
V here seems to be a much bigger condemnation of the American banking system than you might have realised.
Here in the UK, I can transfer to any other UK bank account, given the sort code (a six-digit numeric code that represents the bank that the account is located in) and the account number (always an eight-digit number).
There are three different systems:
* BACS, which is almost free (a standard per-transaction fee of 35p), has no upper limit on the amount transferred, and takes three days.
* CHAPS, which is same-day guaranteed, has no upper limit on the amount transferred, but is much more expensive (£25-35 per transaction).
* FPS, which is same-day but not guaranteed (it falls back to BACS in some rare situations), has an upper limit which was £150,000 until recently but is now £1,000,000 (a pandemic change, though now made permanent), and has similarly small fees to BACS (45p per transaction).
Personal bank accounts normally have free BACS/FPS transfers (ie the bank eats the fees, though they get bulk rates, so they are paying a lot less), business accounts normally pay modest fees. I use BACS and FPS all the time, e.g. to transfer money to friends much like Americans use Venmo. I have used CHAPS exactly once, which was to transfer the money from my mortgage to the vendor when I bought my house. It tends to only be used for very large transactions where timing is very important (like buying a house).
Doing a series of large transactions like paying out grants would require a business account, would require paying fees - but modest ones - and would require contacting the bank so that a large number of payments doesn't trigger their fraud alerting. But the other hassles you had to deal with would just not arise.
American banking is notorious (to Europeans) as being backward. We almost never use cheques, Venmo is pointless here because you can just use BACS (or the other equivalent domestic systems in other countries), or SWIFT for international transfers. You just need their account details / SWIFT code and you transfer directly to a bank account using the app on your phone.
Even for bigger transactions - I work for a big enough organisation that we have recently hired someone to work full-time on sending out and receiving money in the US because the US banking system is so difficult to deal with.
Just to flag up why this system was successful: it was originally founded by 16 large banks in the 1980s. That covered over 90% of UK bank accounts from foundation.
Other banks either join the system as full members or they contract with a big bank to run transactions through that big bank.
In 1996, the "Truck Acts" (the laws that banned companies paying in scrip that could only be spent at the company store) were amended so that employers could refuse to pay in cash provided they would offer to pay via BACS. The result was that the few small banks that didn't support BACS immediately moved to do so, because their customers wouldn't be able to get paid.
Since there's so much funding available for AI-related ventures, this got me thinking that some of the ACX grant applications will probably be more in need of staffing than grants at this point. If you're looking for an employee for your AI research organization, I want to work for you! I'm finishing grad school this year and am currently looking for a job doing data science, ML engineering, AI safety research, or anything related. I'm also potentially interested in helping with any projects of the sort that would be pitched as an ACX grant application. I think I get notified of replies here but I can also be reached by email at robirahman@g.harvard.edu. (Sorry in advance if this kind of comment is only allowed on classified threads; no offense taken if it gets removed.)
ACX grants are relatively small, so recipients probably have the manpower they need already. And it sounds like Scott prefers not to fund AI work, as other orgs already specialize in that.
Is there a plan to measure the success of each of the grants to determine which ones were successful, moderately successful, unsuccessful? I'm unlikely to run a large-scale grant program, but I am interested in the problem of how to effectively improve the lives of those in my immediate community, as well as those in the broader country/world through small grants.
Perhaps we're not aligned on this, but maybe there are others who read this blog who would like to know how to best distribute their surplus in a way that will persist after the initial infusion of cash is given. Where can my extra dollar do the most good for others? I feel like paying off a credit card debt isn't a good use for this, since it feels about as effective as chasing rats out of the house. It looks good for a minute, but they'll just come in around the back anyway. It's not solving a problem, so much as pushing the problem back a few months. I'm really looking to answer the question, what can I do that will change things - even if in a limited way?
I guess the answer to the question is "probably not a grants program", because a grant would require a deeper knowledge of the subject and circumstances than I can give.
I think your experience here outlines a major objection to the Rationality movement - the information needed to calculate the "best thing" to do isn't available. Or, at least, isn't available at a cost lower than the difference that information would make. That is, if you had to choose between paying market rate for the free specialist consulting you got or doing without it, the price would have been larger than the difference the specialist knowledge made to the effectiveness of the grant distribution. So (benefits of uninformed selection) > (benefits of informed selection − cost of specialist knowledge). Informed selection only comes out ahead when you can get the specialist knowledge for free, as you did here.
For the typical person dealing with typical decisions, the cost of specialist knowledge will exceed the extra benefit of informed selection, which makes uninformed decision-making the rational choice.
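As a toy sketch of that rule (the function name and every figure below are invented for illustration, not drawn from the post): pay for specialist input only when the improvement you expect it to buy exceeds its price.

```python
# Toy value-of-information check (invented numbers): buy specialist knowledge
# only if the expected improvement it produces exceeds what it costs.

def should_buy_advice(benefit_uninformed: float,
                      benefit_informed: float,
                      advice_cost: float) -> bool:
    """True if informed selection beats uninformed selection after paying for advice."""
    return benefit_informed - advice_cost > benefit_uninformed

# Hypothetical grant round: informed selection adds $30k of value,
# but the consulting would cost $50k at market rates -> not worth buying.
print(should_buy_advice(benefit_uninformed=200_000,
                        benefit_informed=230_000,
                        advice_cost=50_000))   # False

# The same advice offered for free (as it was here) is trivially worth taking.
print(should_buy_advice(200_000, 230_000, 0))  # True
```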
An interesting project would be to create a crowdsourced approach to grant funding clearing. Imagine a site where someone seeking a grant could upload their proposal and then pre-approved volunteers with relevant experience in the relevant domain could review it for the obvious problems. Amazon's Mechanical Turk is almost what you want for this, notwithstanding the initial credentialing problem.
Many years ago, I read a piece by Bill Gates in the NYT, about what he had learned in 5 years in his non-profit work aimed at eradicating malaria etc. He basically said, he had failed.
It was humble, transparent and introspective. I tried to look but could not find the essay. I think there is much to learn there, on how to figure out where to invest your money, to help people the most.
It seems to be surprisingly hard to give responsibly.
> I take it all back. The crypto future can’t come soon enough. Sending money is terrible.
Why not go the "crypto" (that is to say, cryptocurrency) route? Buying (e.g.) bitcoin is not generally hard (for the amounts I tried so far, which are four orders of magnitude smaller), and once you have it in a wallet you actually control (i.e. have the private key for), there are no practical limitations on how much you can transfer.
I can think of a few possible reasons (all of them with a question mark):
* Buying bitcoin for millions of dollars is hard to do in the US
* Doing large BTC transactions will get the IRS on you or get you into trouble with money laundering laws
* The recipients were reluctant to accept bitcoin for legal (see above) or organisational ("we are a respectable university department, we don't do bitcoin") reasons
* Computer security concerns
* Usability concerns (e.g. sending grants to invalid addresses due to typos)
Of the ACX ++ grants, it seems that a total of three applications include a bitcoin address, which I would consider convenient for donating smaller amounts (as opposed to contacting strangers using a publicly stated email address and just hoping their mail account opsec is good enough and that they won't guilt-trip you into donating more than you wanted to or anything).
> (2) Most people are terrible, terrible, TERRIBLE grantwriters
I do grant writing for nonprofits, public agencies, and some research-based businesses: http://www.seliger.com. The fact you've noticed is why we have a business. Effective grant writing is extremely hard, and most people aren't good at it. ACX grants are relatively small, and I often work on projects in the $500k - $5 million range; we work on a flat-fee basis, typically in the $5,000 – $15,000 range.
Many people who are great at what they do are terrible at grant writing.
Are there not fees with crypto transactions? Both the fees of the actual transaction, plus the fees of converting dollars into, then out of, the currency?
I think you're misusing "comparative advantage" in a way that may be important.
Comparative advantage is relevant when comparing two resources that can't be directly exchanged for each other but can be used to produce the same outputs. In that case, even if one resource is less productive when producing either output, there are gains to trade as long as the relative productivities of the resources differ. In the classic example: I can use an hour of my time to make one fusili sculpture or to make two cucumber pizzas. You can use an hour of your time to make two fusili sculptures or three cucumber pizzas. Your time is more productive than mine at everything (absolute advantage), but I only give up half a fusili sculpture by making a pizza, whereas you give up 2/3 of a fusili sculpture, so I have a comparative advantage in pizza making.
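Restating that arithmetic compactly (same numbers as the example above, just laid out as a calculation):

```python
# Opportunity-cost arithmetic for the fusili/pizza example above.
# Hourly outputs: fusili sculptures and cucumber pizzas.
me  = {"fusili": 1, "pizza": 2}
you = {"fusili": 2, "pizza": 3}

# Opportunity cost of one pizza = fusili sculptures given up per pizza made.
my_cost_per_pizza   = me["fusili"] / me["pizza"]    # 0.5 sculptures
your_cost_per_pizza = you["fusili"] / you["pizza"]  # ~0.67 sculptures

# You have the absolute advantage in both goods, but I give up less fusili per
# pizza, so the comparative advantage in pizza making is mine.
print(my_cost_per_pizza < your_cost_per_pizza)  # True
```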
In this case, the crucial resource is money, and it's directly exchangeable. If you think that GiveWell is better at grantmaking than you, you can give them your money. If Scott Aaronson is almost as good as GiveWell at identifying STEM education grants, and much worse at identifying global health grants, he has a comparative advantage in STEM relative to GiveWell. But he'd still be better off giving his money to GiveWell than making grants himself, even if all of it was for STEM education.
Maybe the equivalent is to imagine that you could buy a $200 blender that could puree a soup in 10 minutes or crush a pound of ice in 15 minutes, or a $200 blender that could puree a soup in 30 minutes or crush a pound of ice in 20 minutes. The second blender has a comparative advantage in ice crushing, but since it's absolutely worse at both tasks, you should never buy it.
The lesson here is: only make your own grants if there's nobody you can give the money to who could make better use of it than you can, in an absolute sense.
I think he should have just used "advantage," or even "absolute advantage." I'd be fine with "better ability" as well. The rule I would argue for would be something like: "Make your own grants if you think that you can direct money to solve a problem more effectively than anyone else you can give money to." The problem is that he explicitly contrasts his use of "comparative advantage" with "absolute advantage."
>And that's an easy one. What about B? If the professor figures out important things about what influences gender norms, maybe we can subtly put our finger on the scale.
She won't. She's a woke loony and her demented ideology will prevent her from ever producing any meaningful insight into the world.
Maybe there are heritable population differences that explain most of the variation. Maybe it's a result of industrialization, and heritable factors affect whether/when a population can undergo industrialization. Maybe these things are true, maybe they aren't, but Alice Evans will never in a million years discover that they're true, because her ideology prevents her from even thinking about the world in this way, so she'll never look there. And if somehow this leaps out at her, she's sure as hell never going to put it in a book.
I found this whole post really affecting, and it has rejuvenated my respect and interest in ACX. But it also left me with two conflicting realizations.
It helped me understand why I've always been wary of utilitarianism as a moral code and effective altruism as a practice. Firstly, because it's kind of counter to the human moral instinct - which is just to do what you think right at the time. And secondly because it's so bloody difficult - almost impossible in fact - to sort out all the confounding variables and unknown cascading second order effects.
But it also convinced me that if I was going to give a lot of money to a grant-giver I'd want it to be Scott. That level of genuine concern combined with serious questioning is pretty rare. And I think good can come out of this if it can come out of anything.
A vague thought... I wonder how many of these grant proposals basically reduce to "Give one or more people a basic income guarantee for a while, so they can buy food while they focus on whatever they naturally want to work on." And would therefore be moot in a world in which UBI exists.
If that's a real class of grant proposals, could one build a useful heuristic around that class? Like, if solving a problem requires money for reasons other than feeding the people who are already enthusiastic about working on it, does that somehow make a targeted grant more beneficial than otherwise?
I work in grants (both proposals and post-award administration) and have some suggestions on how to cut down on the garbage applications. The application form is no longer viewable so I couldn't check - you may have been doing some or all of these things already.
* Require an itemized budget as part of the application.
* Ask applicants to answer several specific questions. Instead of just "why should we give you $," break it into sub-questions, and place a word limit on each. Common things to ask about are why the topic area is important, what expertise the applicant brings, what makes the project uniquely likely to make a difference, and how the money will be spent (this is often a separate document called a budget justification). This will cut down on needing to read through pages on why depression is bad followed by "we're gonna make an app."
* If there are certain things you will definitely not fund, state those up front. Talk to a lawyer about this beforehand as there are some weird rules around international giving in particular.
* Have applicants indicate whether they are applying as an individual or an organization. If as an organization, require proof that the organization actually exists.
* If you plan on imposing any conditions on the funding, state that up front. Common ones include not paying until you receive an itemized invoice of expenses incurred, requiring progress reports and/or a final report at the end of the grant period, and limiting the recipient's ability to re-allocate the funds. That said, more conditions = more work for you, so for this type of program it's totally reasonable to just throw cash at people with no strings attached.
Why aren't grant proposals public? If all grants were put on a public website, like the EA forum, and funders could look through them, and anyone could contribute their expertise, what would be the downside?
Regarding EA having too much money and not enough ideas to fund -
There are a lot of smart people who I'm sure would love to dedicate their minds / career / time to solving tough EA-adjacent problems. However, if you're in a top-tier job / career you don't have the time / mental energy to get close to solving, or even thinking about solving, some big problems. Based on the too-much-money thing, it seems like "earning to give" does not have the same value it maybe used to? It would be cool to see some sort of EA charity startup incubator that would pay good salaries and could lure talented people away from top-tier jobs and give them time to tinker / build / think up solutions for problems that meet EA criteria. It would be classed as a high-risk fund from an EA perspective, but how beneficial would it be to discover / build another charity that has impact on par with the Against Malaria Foundation?
Hey! I was thinking about “Because you have a comparative advantage in soliciting proposals”
I’ve also long thought about diversity and representation problems in EA/entrepreneurship/funding/many of the adjacent spaces.
So I had this idea: what if you tried to source proposals from interesting people throughout the world, the developing world in particular. The goal would be to find suitable candidates that had never even considered the possibility of a grant. The invitation could be as simple as guaranteeing that their proposal would get read.
There’s a couple ways you could do this, here’s 2 that come up for me:
1. Ask people who have done a lot of traveling, "Have you met anyone while traveling who is XYZ?", where XYZ are factors that might indicate your values. I personally might say something like "unassuming/humble, community oriented, generous, smart, potential for impact". In my travels I've met several such people, like https://notmadyet.com/ and http://gogreatergood.com (both of whom are EA aligned!), and more. There's plenty of more well known travel bloggers too.
2. Ask people who have spent real time abroad doing humanitarian work the same question. I first thought of people that have taught English abroad, done habitat for humanity, etc. If you did a survey of former peace corp workers of “name up to 3 people you thought could do a lot with this grant”, any names that came up more than once would likely be pretty interesting candidates, in my opinion.
In this model, there’s a middle source layer. You’re relying on their judgment to be reasonably sound, then to have spent enough time somewhere to have more than a passing understanding of the landscape, and for them to roughly values aligned.
There’s near infinite possible problems with this. Yet given the typical value of diversity, deeply connecting with “users”/“customers”/beneficiaries of ones work for feedback, and context awareness (I think you could reasonably call these candidates “subject matter experts” in the issues facing their local communities, at least), I think this would be a stream worth at least experimenting with.
I’m happy to do a bunch of this work, or at least get it started, if you (or another grant maker) is interested.
I think there is an explore/exploit trade-off going on with charitable giving. I think it is a reasonable approach to put a percentage of your money into 'explore' activities and a percentage into 'exploit' ones - nets to prevent malaria obviously being in the latter category, for example - whereas it feels like the strength of a grants programme like this is to support 'explore' activities, some of which you are going to expect to fail.
I've been a referee on small grant proposals and it is hard though - everything you say resonates!
“I don’t consider us to still be dating” - Brutal!
What an interesting job. Never having considered it before, I would look at their previous work and see how much of the grants were wasted on excessive management fees and bureaucracy instead of actually going on what they were intended.
Unfortunately, that's a classic mistake - see https://80000hours.org/2011/11/it-is-effectiveness-not-overhead-that-matters/
Interesting thanks! I told you I'm a newbie, learning fast though!
I think you should ask people the following question, before funding them. "Why shouldn't I directly give my cash to the victims you want to help? I mean, what is the benefit in giving to them through you?"
That's useful for some types of grants. But not really applicable to eg research into AI safety?
Good point. Because it is to prevent something bad that hasn't happened yet. No victims yet!
Another place direct cash charity fails is in sending cash to victims of a currently active natural disaster. You cannot reach them. Have to use a "middleman" such as the red cross.
I guess the only possible response is "Sorry, I don't feel like I can evaluate this since you've just brutally dumped me via grant application."
If you stumbled backwards into the concept of a VC field, I think it's plausible that you might do well stumbling further into "incubator" territory. A lot of your issues seem to stem from idiosyncratic persons who haven't written a grant proposal in their lifetimes, especially not a successful one. The shoe-string version could be a virtual session and a used copy of a proposal writing textbook mailed to the individual and 3 months before opening the door. Or you could consider something more personal and intense, Y-combinator style, or anywhere in between.
Self reply with anectdata: the number of people who would make intelligent use of a lump sum of cash pales in comparison to the number of people who would benefit from not worrying about their normal expenses for 3-6 months and some serious guidance for their unrecognized talents.
This suggests that one high-leverage opportunity is to write a "good grant proposals 101". On second thought, I'm not sure if this is good on net...
Why not simply judge whether or not you’d be very happy to fund each grant, and then just randomly choose grants to fund from among those? Much simpler to administer, avoids tying yourself in 4D knots, and the losers of this process can then be publicly posted with no negative signaling risk.
Here’s some info on the idea: https://www.nesta.org.uk/feature/ten-predictions-2019/random-approach-innovation/
Another advantage: minimizes unknown unknown bias — that is, bias that’s baked into the scientific fields themselves.
I think this absolutely makes a lot of sense - there's some point in the evaluation process when the difficulty of fine-graining more instances at the boundary gets much higher than the expected value of getting that decision right, so you might as well switch to random at that point. And since that point is where you start getting a lot of the perverse incentives about pitching to the specific evaluator, by randomizing at this point you can reduce incentives to do that.
Two aphorisms from VC types I know (who have actually made serious money)
- you bet on people, not companies/technologies
- usually it's more important to make a decision (any decision), but make it fast enough, than to make the optimal decision.
Regarding the latter, one of the things that constantly amazes me about when I see these people in action is precisely this "make a decision fast, and don't look back" aspect to how they live their lives. This is not a claim that the decisions are thoughtless, or coin flips; rather
- the appropriate amount of pre-thinking has gone into the issue so that when the specific circumstances require a decision it's an informed decision
- excess time is not spent on trivial unimportant decisions (to paraphrase "just buy the first one that looks good enough on Amazon; it'll probably do the job, and if not you're out a few dollars which is worth less than spending five hours on internet shopping research"
But I will be the first to admit that this is not an easy skill to acquire, especially the learning not to regret whatever decisions turn out incorrectly (as some inevitably do).
That's surely true at some point but I'm not convinced that it is realized here. It stands out to me that the extra info Scott kept getting was the kind of info that massively affected his subjective estimate of proposal success. Isnt it usually only when it only makes small differences that it makes sense to stop looking for more info?
Indeed, isn't there a common fallacy where people stop trying to save money on big purchases once it drops below some percentage of the price, rather than looking at the absolute size of the effect?
Administering this amount of money seems like something where spending an extra couple of weeks of full time labor picking better outcomes might be worth it.
But if it becomes known you're doing this, then you'll get a lot more submissions of "just good enough" grants by people who want to exploit the process. Heck, I'd spend twenty minutes writing a decent-at-first-glance grant proposal if there was a chance of a free $50K.
I really don't get this -- if people could write lots of "just good enough" grants (which, remember, the line is "Scott feels very very happy funding it at first blush", not "Scott vaguely feels good about it"), then why wouldn't they also submit those to the actual grant program? If you can write a decent-at-first-glance proposal, maybe you SHOULD get the $50k!
The supposed internal reason for feeling good about a grant is believing that it will actually be used to achieve the proposed purpose. Melvin here talks about having zero intention of actually following up on their grant proposal, and having zero faith in it actually working out. The cause is unachievable by default.
People can already write extremely bad-faith grants to the existing process, though, and they didn't appear to? Why would slightly tweaking the approval process change who is applying? Do you really think you'd get an influx of fraudulent grants under this scenario?
When people write grants they by default assume that their work will be professionally reviewed (which is what Scott really tried to do here). They assume that half-assed grants without any real prospect of success have approximately zero chance of winning, and preparing a convincing grant is a lot of work to spend on something with little chance of paying off.
However, if people realized that their grants are not checked too thoroughly, it would push a lot of them to try to pull a scam, as they would now know they have a real chance of getting money. Of course, this only happens if information about Scott's decision process somehow becomes known to the general public, and there's a big difference between "being a bad reviewer" and "people knowing that you are a bad reviewer". Still, there's a risk. Say Scott does another public post about grants, and then some commenter points out that it's obvious that "grant X" cannot possibly work and everyone who researched the topic even for a minute would know it. That would be a big hit.
They absolutely do.
Google "john wittle education grants", try to find my tumblr post series, or my lesswrong posts, or the time Scott talked about my comments back on SSC.
Yeah, but you have to account for how you spent that $50k. If it's "I bought four houses" or "I paid my ex-boyfriend" (see Freddie's post here https://freddiedeboer.substack.com/p/white-journalists-are-terrified-of) then eventually you have to pay it back or make some return of money you obtained deceitfully.
Of course, if you're a professional scammer, you probably spend *another* twenty minutes weasel-wording the proposal such that "yes I bought four houses, see the small print, and these houses were for the benefit of the charity (i.e. me)" gets you technically off the hook.
Because if that decreases the quality with which I spend $1 million by 5%, I've wasted $50,000 = ten lives.
You've already made the point very eloquently in this post that you don't have enough wisdom/discernment/predictive ability to allocate the grant money to a level of precision of 5%. By doing the grant at all you've already accepted that you will misallocate some money, but that's fine because you'll get some big wins as well and (like an angel investor) you'll "beat the market" by saving more lives than donating everything to a charity. So I think it's more like:
1) Go the way you did, beat yourself up judging the grants to the point that you're miserable, and misallocate the grant funds by 20% anyway due to bias in your reviewers. Also, systematically avoid funding some really good grants because of biases in your experts.
2) Go the random route, misallocate the grant funds by 20% due to your inability to even do the "smell test" well. Avoid lots of bias for free, maybe pick up a grant that "everyone knows" can't work, and have a really easy time administering the grant. You feel really good and ready to do it again next year.
Imagine if you had done it the random way -- you would have personally had a better experience and could more heartily recommend the grant process to other people, instead of saying that it's likely to make you miserable!
Why do you assume the random route has the same level of misallocation as the route Scott did take? Funding based on feelings seems like it'd very easily get a higher error rate than funding based on research, even research Scott isn't super good at.
I mostly just don't trust experts to a huge degree. I think it's sort of like how monkeys throwing darts can do better than a lot of hedge funds. The key insight is that if you have experts involved they often miss things that everyone "knows" are impossible. I've seen it in SO many scenarios that it forms a big part of my intuition, but it's hard to communicate and sounds silly. I also think that randomness has powerful salubrious effects in avoiding weird bias.
I also think Scott has better intuition than he might imagine -- I trust Scott's gut feeling followed by random allocation more than I trust any panel of experts.
That's a fair critique of investment management experts (and mostly true!), but saying you don't trust experts in general is pretty significant! I mean, I trust a surgeon (an expert in surgery) to do a better job operating on me than a random non-expert, so I think there's a significant degree to which trust in experts is an important thing to have. Not total trust, mind you; a better surgery would probably be done by having an expert AND a non-expert observer (maybe), but I think having an expert at all is really important.
I mean, as Scott posted in his examples, how should he differentiate between two biomedicine grants that he knows very little about without consulting experts? Going based on emotion here, if you trust Scott to be accurate in his reported abilities, would just be a coin toss.
To respond to your second point: I think Scott should read the proposals and then use his intuition to either qualify or disqualify each one. Put the qualifying proposal(s) on the "to fund" pile. Then when he's done, randomly allocate funds. If both make it to the pile, then they have an equal chance of being funded. I think in Scott's case his intuition will be reasonably accurate, and in any case it's him being directly responsible for the process and not deferring it to others, and it's not stressing him out too much, so that he and others do it again. Sometimes you can't differentiate things well, and that's OK. I don't find the agreement between the experts to be that useful; it's exactly what you'd expect if the field itself has biases, and at least for grants it pays to sometimes buck the trend. I trust Scott's intuition to be simply better for these purposes than experts.
To respond to your first point: I obviously have something like "trust" -- I ride planes, I drive a car, I use the internet, and I would address a severe medical problem by (very reluctantly) using the medical system. All of these things are built by people and they do work (and in cases like planes they work very well!) But through experience I've learned to sharply limit my trust to avoid it "leaking" in ways that aren't warranted.
For example, I believe that artifacts produced by people, like cars, can work well. I don't believe that a group of "auto experts" would be likely to evaluate grants relating to improving automobiles better than Scott. I don't expect them to be able to build cars themselves if they were stranded on a desert island. I expect that for most of their beliefs about cars, they haven't put in the effort to truly evaluate their beliefs against reality. Instead their beliefs are a copy of whatever their field says, and likely to be all biased in the same way. In reality it's even worse than that -- I expect that in a group of car experts about half of them wouldn't know very much about cars at all!
I believe that something like this is true for practically all groups, and it's my default assumption. I believe this because it's proven true in my life over and over again in many fields in which I AM an expert personally, and I've actually updated on that information instead of Gell-Manning my way through the rest of my life.
If you can really understand that investment management experts aren't good at their jobs on average, why shouldn't that knowledge propagate to other groups as well?
(note: a few very rare people can genuinely be good at things and deserve a lot of trust; these are the people who design the stuff everyone uses. But these are individual people, not groups. I think Scott is a worthwhile individual person to judge stuff, and most modifications to this simple process will just make things worse.)
The reason why that is true in finance is that prices have been optimized to be fair, so it’s hard to make good or bad choices. That is…not true of the grant pool, probably even after an initial pass.
Side note: one thing that I've recently learned is that a lot of hedge funds exist not to generate abnormally-high returns as much as returns which are uncorrelated with the market.
(You probably meant managed mutual funds which are trying to beat the market; that I missed the goal of hedge funds for so long makes me want to share it whenever I can.)
Well, the reason you want uncorrelated returns is so that you can average many of them together and slap on leverage, to get higher absolute returns.
In any case, the goal of hedge funds is to make money for the people running the hedge fund. And that mostly means having a lot of money under management for a long time.
See Matt Levine's Money Stuff for details.
I'm not sure it makes sense to say I'm not good enough to allocate to a level of precision of 5%. If I am terrible but try hard, I might go from a D- to a D, which is 5% better.
Fair enough! (assuming you mean D- to D?) I personally see trying really hard to evaluate grants more as trading away the bias you know for the bias you don't know, but I can see how a lot of work, at the margins, might allow you to predict that a grant won't work because it's got some fatal flaw. (but then again, the grant that really gives you 10,000x return will probably be an opportunity that hasn't been funded yet because it SEEMS like there's something wrong with it but in reality there isn't.)
But in any case, is that extra work really worth it compared to a more streamlined approach? I think that at the level of $1.5MM it's genuinely worth it to burn 5% ($75,000) just to make it easier for you personally, since it means the difference between you feeling like it wasn't too hard vs feeling like it was an ordeal. That's important for establishing things like this as a community tradition. I really wish you could have written a post saying that the grants program is actually fairly easy and fun to administer; that would be worth a lot in and of itself!
Another way to think about it is that lots of charities have MUCH higher overheads than 5%. If that's the price for making it sustainable for you then it seems worth it to me.
I don't know if this is reasonable, but it could be possible to see this year as an investment that's mostly about learning how to evaluate grants, with being effective manifesting in future years.
>VC-ing is a field as intense and complicated and full of pitfalls as medicine or statistics or anything else.
Here's a really interesting article by Tucker Max about why he got out of venture capital.
https://www.tuckermax.com/why-i-stopped-angel-investing-and-you-should-never-start/
Short version: it takes way more time than you'd think. To be a successful angel investor, you pretty much have to know your founders' businesses as well as they do themselves.
"Even though angel investing looks like this casual, easy and fun activity, make no mistake about it, if you want to avoid losing your shirt, you spend a LOT of time on it: finding deals, vetting companies you’re interested in, and then once you invest, working with them like hell to make them succeed.
Just one example: I invested in a custom dog toy company, PrideBites, and have probably spent at least 500 hours over two years learning about the dog toy space, the dog retail space, and the complexities of Chinese manufacturing and logistics (so I can better advise them). Not to mention, another 500+ hours I’ve spent with the team helping them through all the hundreds of issues that come up. [...]
That’s almost a full time job–and it’s only ONE company. "
(yeah, that's the same Tucker Max who used to write stories about getting loaded on Everclear, fucking midget strippers, and having diarrhea in hotel lobbies. He's had some life changes.)
Nitpick: 1000+ hours over 2 years is roughly a quarter of a full-time job (at least in my part of the world, not sure about the US). But Max's point is well-taken.
Re: the "impact certificate-based retroactive public goods funding market" -- you may want to check out social impact bonds if you haven't already: https://golab.bsg.ox.ac.uk/the-basics/impact-bonds/. It sounds like a somewhat similar concept, albeit without the prediction market component. It does, however, have the advantage of having been implemented in practice.
You mention Tyler Cowen a couple of times, but one nice lesson from Stubborn Attachments is that when uncertainty abounds, look for the option with the largest upside, the one that rises above the froth, where upside is defined from an optimistic perspective. Interestingly, this ties in to a core idea in machine learning: 'optimism in the face of uncertainty'. The one-sentence claim there is that if you are optimistic and try something, either 1) everything works and great things happen, or 2) everything goes wrong, but you learn a lot which you can use to update your world model for future grants. You can also update everyone else's models, when they see which funded projects succeeded.
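For readers who haven't run into the phrase, here is a minimal sketch of 'optimism in the face of uncertainty' in its textbook multi-armed-bandit form (UCB1). The arms, payoff probabilities, and round count below are made up purely for illustration; nothing here comes from the post itself.

```python
import math
import random

def ucb1(pull, n_arms, rounds):
    """Always play the arm with the highest optimistic estimate:
    observed mean + an exploration bonus that shrinks as the arm is tried more."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, rounds + 1):
        if t <= n_arms:
            arm = t - 1                      # try every arm once to start
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Toy "grant types": each arm pays off with a different hidden success rate.
hidden = [0.1, 0.3, 0.6]
pulls = ucb1(lambda a: 1.0 if random.random() < hidden[a] else 0.0,
             n_arms=3, rounds=2000)
print(pulls)  # most pulls end up on the best arm, but the others still get sampled
```

The "optimism" is the bonus term: arms you know little about get inflated estimates, so you keep trying them until the uncertainty resolves one way or the other.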
That's a great quote. If I ever snap and become a genocidal dictator, I'm going to put "optimism in the face of uncertainty" over the gates of my extermination camps.
That's the spirit!
Re: grant-writing, for my fellow scientists out there, the best advice I ever got was to focus on "knowledge gap". You have to articulate what is the thing we don't know that you are going to figure out. Then you explain how you will figure it out. This isn't like some essay where the overall concept builds over the course of X pages; you have to articulate the knowledge gap explicitly enough that it's crystal clear to someone reading ten grants in one day. You can even put the pivotal sentence(s) in italics for extra emphasis.
I tell this to you so that if your ideas are better than mine but your writing is worse, you can beat me fair and square.
I think that making ballpark guesses might have been preferable. Once you rely on others, you might end up with a selection by committee that looks pretty similar to what existing orgs are already doing, except that they have more experience doing that. But your own perspective can't be replicated by others.
The bigger problem is simply that there were too many proposals to sift through. Maybe a shorter word limit could have helped, as you can always request more info later.
I wonder if it would be reasonable to require some small payment (e.g. $20) to apply. The proceeds could go to charity or to increase the grants pool, and it could reduce the nonserious applicants by a lot. Though I also expect there'd be some good applications that would be put off by this, more negative feelings by people who aren't picked, and maybe even some legal issues...
There may be a local Community Foundation or Charitable Foundation in your area. This is an entity that allows people to set up their own charitable funds under the legal umbrella of the main entity. These can be funded in various ways and operated in various ways, including donor directed. That might be an answer for your money transfer issues.
"This grants program could be the most important thing I’ll ever do."
Almost certainly not. You have a proven ability to write outstanding, insightful blog posts that get thousands of exceptionally smart people thinking, some of whom will be influenced to do important good things (or stop doing important bad things).
What are the chances that you're *also* astoundingly good, or even pretty good, at administering grants programs? Small.
Stick to your knitting. Find someone else who's good at the grants stuff and get them to do it.
In the past I would've been persuaded by an argument like yours, but since reading Scott's https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/ I'm not so sure. This, as I read it, is the context behind the sentence of his you quoted; the "could" captures the very large error bars around his uncertainty on this.
I think my writing is influential because it causes things like people giving me $1.5 million for grants. I think allocating $1.5 million to charity is inherently worth a *lot* of writing, especially if it's true that it can save ~300 lives.
I think you're underselling your writing here, which has made a big impact on a lot of people doing important things; it's hard to say how many lives it's saved, but it's definitely added a whole lot more than $1.5 million in value to the world, however you want to measure that. Having said that I think the grant program was also a great idea simply because these skills seem pretty correlated (or rather, both highly correlated with understanding and evaluating complex systems).
Scott, $1.5M is pocket change compared to the *existing* influence of your writing, let alone the marginal influence if you spent the effort on more blog posts instead of grants. This is not hyperbole.
One example: Figure the net present value of increasing the chance of abolishing the FDA by 1%.
Stop staring at the coins in the sofa cushion - there's 10 billion dollars in your living room.
“The ideas of economists and political philosophers, both when they are right and when they are wrong are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist.”
― John Maynard Keynes
He was right about that (at least), and it applies more generally to ideas and viewpoints that shape the world.
Stick. To. Your. Knitting.
I share the intuition that Scott's writing is way more valuable than his grant making. But I view this sort of activity as collecting experiences that will enrich and inform his future writing (it's already happening!).
Fair point. But Scott has only 24 hours/day, like the rest of us (I can barely keep up with his *output*).
Money, and the dream of power that goes with it, has an amazing power to distract and corrupt.
Indeed. And this is also related to https://astralcodexten.substack.com/p/why-do-i-suck
Scott writes his insights that are a result of his experience. If he stopped having new experience... well, his backlog of insights is probably long enough to sustain his writing for a few more years, but maybe the quality would drop, because not all insights are equally important or equally inspiring, and the best ones were probably already used. The experience with grants was new, and as a side effect we got a high quality article on a new topic.
Also, there is a potential for future articles, if Scott decides to investigate the outcomes of the grants.
"Another person’s application sounded like a Dilbert gag about meaningless corporate babble. “We will leverage synergies to revolutionize the paradigm of communication for justice” - paragraphs and paragraphs of this without the slightest explanation of what they would actually do. Everyone involved had PhDs, and they’d gotten millions of dollars from a government agency"
As they say, the answer is in the question. They're accustomed to writing proposals for government grants, which means they have to shovel in all the buzzwords to show they're hitting the goals for which the grant is established.
My job before my last job was like this; a new thing that came in was a monthly report from every centre about hitting a bunch of goals set under targets listed in separate sections in a newly-created overarching statement of achievement.
What it meant in reality was "what did we do this month?" "well, a bunch of the same stuff we do every month" "okay, pick out three things, then trawl through all the bumpf and pick out headings to say we did this, that and the other", which I then wrote up with copious chunks of the jargon scattered on top.
You couldn't say "this month we recruited six new members for our programme", you had to show how this came under Goal No. 15 of Section 4 of Unit 12. It was feffin' stupid, everyone involved knew it was feffin' stupid, the ideal was that every centre was sitting down for a meeting at the start of the month to plan out our New Goals For This Month but in actuality it was "oh crap, the monthly board meeting is in three days time, quick, write up a report about our New Goals to be presented there" and I was one of the people just pulling stuff out of - the air for such reports. But it filled the real aim of the exercise, which was "here is a Thing we are supposed to be doing, and now we are reporting that we are indeed Doing The Thing".
Bureaucracy, what a wonderful thing!
My current job partly involves doing this as well, so your comment is depressingly familiar.
Yeah, any public/civil service job will be the same. I think it's because a lot of things come from top-down initiatives ("the government has decided that they will encourage healthy eating, so this is The Year of the Cucumber!") and because a lot of funding comes from The Government, you have to comply with the strictures.
So even if your work has damn-all to do with cucumbers, you still have to Show How This Project Encourages Cucumbers (Growing, Eating, Making Available To Public, Pickling Or Raw).
Part of the problem is that you get directives and policies set by people who come out of environments (like academia or business, governments love trying to run public service along business principles even if it's like cramming a square peg into a round hole) where it's *natural* for them to use jargon like "leveraging synergies" so they write up proposals full of this type of guff, which get converted into policies, which get sent along to your particular organisation as "this is the model we want you to use".
End result - you can't write basic descriptions of what you actually did, like "recruited new clients for programme", it has to be the synergistic paradigms stuff.
I am once again reminded about how amazingly backward the USA is in something as basic to a market economy as 'making payments to people'. And I guess the entire world, to be fair. It's just surprising that a country with an image of itself as such a capitalist Mecca would be so bad about it. Or maybe the fact that the payments systems are so heavily gatekept is the lesson in itself?
When you look at a field that involves literally millions of people and literally billions of dollars and see issues, it may be that you are some visionary who can see further than others.
But more likely is that you simply don't understand many of the issues involved, from security, (un)revocability, and fraud tracking to tax to the law (Patriot Act puts severe constraints on randomly moving around more than $10,000 to unclearly verified third parties).
My understanding is that the main issue with money transfers in the US is that there's just so damn many banks. In smaller countries it's easier to get all of the banks to agree to some standard, but the US is overrun by tiny local banks where the whole computer system is a bunch of Wangs run by some guy called Steve.
That doesn't excuse the large banks from solving it, or from being forced to solve it.
From 1927 - 1994, interstate branch banking in the U.S. was forbidden by the government. This had all the usual fun unintended consequences, like more bank runs (because less geographical diversification, and thus more exposure to things like droughts in farm country, natural disasters, etc... compare to Canada's better experience) and lots more smaller regional banks than a market system would've ended up with.
On top of that, the primary clearing house in the U.S. is run by the federal reserve, yet another government agency. Again, not usually going to stay ahead of the technology curve (see also SWIFTNET).
Finally, as noted above, the financial industry is one of the two most heavily regulated industries in the U.S., the other being healthcare/health insurance. As a result, there are many, many regulatory barriers to moving money around, and very few encouragements. Most of those involve required paperwork for you or your bank (or both!) to do, lest you, for example, be considered a money launderer.
Interesting. So the big banks like Bank of America, Wells Fargo and Chase were local banks as recently as 1994? Or they had some weird structure that allowed them to operate nationwide?
Some of them operated as bank holding companies (and still do). So they'd have a local Wells Fargo CA and Wells Fargo AZ, and a Wells Fargo Commercial Banking, and a Wells Fargo Investment Banking (to fall outside the rules) which would be organizationally separate, but all owned by the same set of investors.
This basically started in the 80s when states began to legally allow holding companies to own banks in multiple states.
The USA is hilariously behind the times. It's not just a matter of using an outdated form of credit card; from what I've heard, you still use checks, and bank transfers between different banks take longer than a single working day.
For reference, in Poland a free transfer arrives within 12 hours on a working day between any two bank accounts, and within 15 minutes if you pay extra.
Disclaimer: I have never moved $1,500,000 or similarly large sums, but as far as I know large companies use the same system.
It is because of the SEPA system and IBAN (standardized account numbers) in the EU. This is one good, intentional feature of the EU.
On the other hand, my first response to Scott's dilemma was "you know checks are still a thing, you could just write one and put it in the mail?" At least for the US grant recipients. Checks are not the *most* timely and convenient way to transfer money, but they do at least bound the inconvenience and at a level below what it seems like Scott was up against.
I used to do freelance work for US companies and most of them paid by check. During 7 years I think I had 2 or 3 cases where the check was lost in the mail and had to be reissued. I wasn't bothered that receiving money via checks was a very slow process. My biggest problem was that European banks charged high fees for depositing them.
I think that the US kept using checks longer than other countries because of convenience. Indeed, the time it takes to write a check is less than the time to send a wire, and it's probably less prone to errors. Although the cost of fraud could be higher.
Except that the USA is the odd one out. For the longest time, I used to say that America for some reason was developed in other ways, but had the consumer banking sector of a developing country. It made sense because the rest of the developed world was just so much ahead of the USA.
This no longer works, because today most of the developing countries have better banking than the USA does.
Is it really that easy to move $100,000 in a random developed country? And what are the rates of fraud?
In the European Union it is trivial (3 minutes of work; delivery within 12 hours on a working day for free, or within 15 minutes if you pay for a quick transfer).
Fraud happens (usually in the form of social engineering of various kinds) but seems less impactful and easier to avoid than check-based fraud in the USA.
I strongly suspect that if Scott had got this “Medallion Signature Guarantee” (which probably is a simple electronic signature generator device), his experience making wires wouldn't be much different from European experience.
The difference is probably that most Europeans have never used checks, and for them it is natural to get a cryptographic device, or nowadays simply an app on your mobile, that you have to use to sign your bank transfers, whereas in the US most people have too much inertia to spend the time figuring it out, even though it would make their lives easier.
Also, consider that Scott's time is now very valuable. Apparently his income stream is at least $5000 per day. If he needs to make about 100 payments and each payment takes about 10 minutes (let's be real: he might need to look up account details for each payee, sometimes clarify them, double-check entries for mistakes, electronically sign them, etc.), it would take 1000 minutes, or 2 full working days ($10,000 worth of his time). It is much more reasonable to outsource this work to someone who is more experienced and can do it cheaper.
A bad side of the European banking system is that the verification method is often either an app (which commonly refuses to work on de-Googled or rooted Android phones, and has other similar issues) or SMS (SIM card hijacking is easier than most people expect). Proper separate devices are rare.
> I take it all back. The crypto future can’t come soon enough. Sending money is terrible.
We could also just change the regulations or use various interventions to actually get it up to standard. FinTech in the US isn't as bad as some places but there are places that totally eat our lunch. The Chinese have managed to do better without using crypto. Or the Koreans if you want a democratic version.
> Big effective-altruist foundations complain that they’re entrepreneurship-constrained.
They're wrong and/or lying. The incentive of funders is to encourage people to apply. It makes them look more selective and puts them in the position of rejecting rather than soliciting. Sometimes they find a gem and they're all very adept at tearing through large numbers of applications. A bad application can be quickly dismissed by some admin or another. Further, people looking for funding will generally do a circuit where they apply to dozens or hundreds of places.
The other two (an advantage in getting funding and an advantage in evaluating applications) are the important ones. As you've discovered, there are people who optimize around being fundable for grants. It's practically a career, and there are entire industries of consultants. The real advantage would be having some ability to identify people who can accomplish the goals of big charities but don't look traditionally fundable -- the people who get rejected by everyone else. The issue with this is that it's hard. The thing about Harvard is that 90% of Harvard MBAs are going to be a solid B. A few might be A+s. But B- is as low as most will go. The general population can range from A+ to F, and you need to figure out how to determine that at scale.
Anyway, I've never run a grant program but I've definitely worked as a judge. Will it make you feel better to hear that your experience sounds typical? I'd say applications usually break down something like:
50% so awful they have no chance
30% that are okay but definitely below the cut
5% that are good or great but just not a good fit for this particular grant/program
10% that are good but not great. A few of these get through if they fit a particular need.
5% that are really great. These are your pool where every cut hurts.
And then there's the occasional, rare slam dunk that's definitely getting in. But these are rare enough that you often get 0 in a round.
If you have to adjust the numbers: subtract from the good ones and move it into the worse tiers.
(How does this square with not being entrepreneur-constrained? Most grant programs have tiny, tiny acceptance rates. Let's say you have a pool that's only got 7% good-to-great applications. If you have 1,000 applications, that's 70 good ones. If it's a $50,000 grant program, that's $3.5 million -- more money than you probably have. If you have $1.5 million then you're accepting about half. But you're funding-constrained, not entrepreneur-constrained. Still, you're encouraged not to think of it that way for the reasons above. Besides, you really want to have all top-1% applications so you can convince yourself the 5% was a marginal application.)
PS: I'm curious to hear about this AI charity stuff. I've got connections to the non-profit world and such, and I've never heard of these. But I might be on the wrong coast?
>We could also just change the regulations or use various interventions to actually get it up to standard. FinTech in the US isn't as bad as some places but there are places that totally eat our lunch. The Chinese have managed to do better without using crypto. Or the Koreans if you want a democratic version.
In the US can't you just mail a cheque? You gotta pay for a stamp and it takes a couple of days, but that's better than paying 2% to Paypal.
That was my thought, too. There are some limitations I know of: depositing checks can be inconvenient, especially checks large enough to trigger procedures for detecting fraud and tax evasion, and they can take several days before the funds are available (due to the US still using a check-clearing automation system that was first set up in 1972). But unless I'm missing something big, it seems like a better option than PayPal or wire transfers.
PayPal also has large fees and repeated cases of stealing (AKA freezing) accounts.
Sending by mail is much slower and less secure. People can and do steal such mail. Besides, in some places you can have a completely secure transaction that takes a matter of hours for big transactions. Often less for smaller ones. That's just clearly superior to mail.
Mail theft is quite rare, at least if the organizations on each end are tight enough not to have internal mailroom-theft issues. And if the check is stolen, so what? Cancel it and send a new one. Yes, that means the check takes three weeks to receive and clear instead of two - but what's your time value of money if you're even *thinking* of eating a 2% transaction fee rather than taking 2-3 weeks to clear the transaction? Checks are slow, *slightly* inconvenient, cheap, and reliable.
It's worth pointing out that, like job applications, grant applications get a lot of bad applicants because the people that get rejected then go on to apply everywhere else, while the people that get accepted get on with doing the job and aren't just sending out constant applications.
This isn't true. Most people stop applying for jobs once they have a job. Few people stop applying for grants once they have a grant because they can always use more money.
"There wasn't as ready-made an EA infrastructure for biology, so I jury-rigged a Biology Grants Committee out of an ex-girlfriend who works in biotech, two of her friends, a guy who has good bio takes in the ACX comments section, and a cool girl I met at a party once who talked my ear off about bio research. Despite my desperation, I lucked out here. One of my ex’s friends turned out to be a semiprofessional bio grantmaker. The guy from the comments section was a bio grad student at Harvard. And the girl from the party got back to me with a bunch of detailed comments like “here’s the obscure immune cell which will cause a side effect these people aren’t expecting” or “a little-known laboratory in Kazakhstan has already investigated this problem”.
Was thinking about this: does anyone know if there's some sort of expertise swap out there? I often need to know weird things about other disciplines; I have access to leading experts on bees, woodpeckers, javascript, and chromebook touchpad drivers. This is very useful on the occasions I need to know about one of those things. I don't know any chemists, which is very irritating when I have big chemistry questions. I assume someone out there sometimes has questions about CS pedagogy or AI that are similarly going unanswered.
Courtesy of another ACX commenter: https://crowdfight.org/
You could try using the relevant StackExchange or Quora for a cheap on demand version. For learning existing content you might try Twitter or YouTube. Working at large consultancy or similar professional employer with lots of different disciplines represented will also let you play telephone very effectively.
There's a lot that's very interesting here, but one confusion I see throughout is
- are you funding a CAUSE or are you funding a PERSON?
Because (the way I see it, anyway) at this small money level, you are very much funding a person, not a cause. (You can only claim to be funding a cause independent of people when there's a whole infrastructure of multiple people involved...)
And that simplifies the problem tremendously. It doesn't matter how great the cause appears to be if the person charged with implementing it is incompetent, deluded, fraudulent, naive, or all the other various relevant pathologies. So you can immediately weed out everyone who gives you a bad vibe in their application, even if you can't quite put a finger on it.
Depending on exactly how many grants are left after you weed out
- person I simply do not believe can do the job with the money AND
- cause I do not care about enough to fund
my next filter would be, is there anyone, anyone at all, in the list who shows any proof of work ability of anything, anything at all?
This sounds harsh, so the question is what is the goal here? If the goal is "give out money and help a few people", well, credit card debt in the US and medicine for Africa. If the goal is "actually *achieve* something with high leverage", how about starting with someone who has achieved something in the past?
Now we get to the contentious area of "how many people are actually capable of doing anything whatsoever" where, uh, let's say, opinions differ widely. But I would say, based on my limited experience of either seeking a job, helping others find jobs, or helping others hire people, that the easiest thing in the world (for the actually competent) is to show proof of their competence:
- you want a computer job? Give a link to a great program you have written.
- you want a graphics design job? Show an imaginary campaign that you created.
- you want an electronics job? Show an interesting project you created.
These are not very high bars. And yet, 95% of people cannot clear them. This doesn't make them non-citizens, or inadequate drones. But it does mean that they are interchangeable, non-special people. They simply do not have the spark of drive and originality that makes them capable of just leading projects (even if "leading" means "do your own work without daily oversight and supervision") let alone creating something truly new and taking it to completion.
So, unfair though it may seem, that's what I would demand from an applicant -- *proof* that you can *achieve* the task (or at least make a good effort).
This is, IMHO, what Peter Thiel is doing with his infamous question (“What important truth do very few people agree with you on?”). The point is not the answer, it's to show that there is some degree of originality in this person's thought. There are multiple ways of getting to the same point, but every one of them boils down to
- I know you claim to be able to achieve this, and
- you may even have a credential (or reference or whatever) to that end, but
- talk is cheap, and many credentials are a lot easier to obtain than they should be, so
- SHOW ME something relevant to your supposed ability to achieve the goal.
And if the person cannot do that (believe me, you will hear an endless stream of justifications about how they've 'been so busy with school', 'never have time to think', 'my deprived childhood', blah blah). All of which may or may not be true, but they very much do indicate that
- the person (unlike the obsessives who achieve things) does not prioritize this task above everything else, and does not think about it night and day, every night and day; and
- that they don't even see anything wrong with this (which means they will neither be able to achieve the task themselves, nor have any competence in hiring/working with those who might take up the slack).
There are now several people in or known to the rat-sphere who have done a microgrants program. Can they create a document with lessons learned so that the next person who tries it can have something to go off of?
Regarding the future version of this, I hate to be the crypto guy but I think it would be a great fit, though I would make the tokens fungible.
The way I would do this is: you make a platform for submitting proposals. Then, for the ones you've approved and committed to (or anyone else who wants to, for that matter), we run a token sale, where people buy the proposal's token for dollar-backed stablecoins, with a threshold so that if the project doesn't get enough funds to execute, everyone gets their money back.
To keep the math simple let’s say we’ll issue the same number of ProposalTokens as the number of dollars committed. Then at the end of a successful project, everyone with ProposalTokens can exchange them for the corresponding amount of dollars.
This way, you don't need a single investor to cover the whole amount; it can be crowdfunded. Furthermore, there will be a market for each project's tokens, and the price of the ProposalToken would function exactly like a prediction market for the success of the project. (And implicitly the trustworthiness of the commitment, unless you want to lock up the rewards ahead of time.)
You could even make it so that the team that's working on the proposal cannot sell all of their ProposalTokens at once, but instead has to do it in batches over time. This way, they are incentivised to keep everyone updated about their progress, so that they can raise more funds at better valuations.
The only thing is, there would have to be some upside for the investors to lock up money for a year -- some gap between what's required to execute the project and what's ultimately paid out. I guess that's the price of doing this retroactively. Or maybe just the fact that you're able to help a charity for $0 is enough?
I’m sure this is along the lines of what you’re thinking already, just wanted to share my 2 cents
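To make the mechanics concrete, here is a minimal toy model of the ProposalToken flow sketched above, written as ordinary Python rather than an actual smart contract. The class name, the $50,000 threshold, the investor names, and the payout figure are illustrative assumptions; only the one-token-per-dollar rule and the threshold/refund idea come from the comment itself.

```python
from dataclasses import dataclass, field

@dataclass
class ProposalSale:
    """Toy model of the ProposalToken idea: stablecoins in, tokens out,
    refunds if the funding threshold isn't met, pro-rata dollar redemption
    if a retroactive funder later pays out for a successful project."""
    threshold: float                        # dollars needed to execute the project
    pledges: dict = field(default_factory=dict)

    def buy(self, investor: str, dollars: float) -> None:
        # one ProposalToken issued per dollar committed, to keep the math simple
        self.pledges[investor] = self.pledges.get(investor, 0.0) + dollars

    def funded(self) -> bool:
        return sum(self.pledges.values()) >= self.threshold

    def settle(self, payout_pool: float) -> dict:
        """Split the retroactive payout pro rata by tokens held."""
        total = sum(self.pledges.values())
        return {who: payout_pool * amt / total for who, amt in self.pledges.items()}

# Usage: three small investors crowdfund a hypothetical $50,000 proposal.
sale = ProposalSale(threshold=50_000)
sale.buy("alice", 20_000)
sale.buy("bob", 20_000)
sale.buy("carol", 15_000)
if sale.funded():
    # project runs; a retro funder later pays $60,000 into the pool on success
    print(sale.settle(payout_pool=60_000))   # each investor gets roughly 1.09x back
else:
    print("threshold missed: refund every pledge")
```

The interesting part, as the comment notes, is that a secondary market in these tokens before settlement would effectively be a prediction market on the project's success.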
Austin from Manifold Markets here; I've been thinking about this problem from the opposite perspective, in terms of the opportunity cost that grantseekers pay to navigate the EA funding landscape (my back-of-envelope estimate for Manifold was $3k in time spent). My own proposal was to consolidate the different kinds of funding applications into a single "Common App": https://blog.austn.io/posts/proposal-a-common-app-for-ea-funding
Ideally, this platform would also allow grantmakers to better coordinate on which projects they want to fund, and allow new microgrant creators to easily get started (I saw that Scott Aaronson and Nuno Sempere both started microgrant programs patterned off of ACX).
An impact certificate model also sounds like a great idea! If Manifold's infrastructure or technical expertise would be useful, let me know (akrolsmir@gmail.com) and I'd be happy to help.
I heard this at a much higher level a few years ago at a conference. The speaker was a CEO who sold his company for some crazy amount of money, turned to philanthropy, and was shocked at how difficult it was. Turns out that if you are giving away money and want to do it well (especially at scale), it calls for an organization and operational expertise. He has since started spending his time working with similar folks who wanted to set up something like this but didn't know how to get it started. I had never thought about it before, but when you think about it, it makes sense. The weight of donating $100M while doing it "right" is incredible.
Yeah, it's hard to give away money if you're not standing on the street corner handing out bundles of tenners to random passers-by.
First, if you naively just give out money, every begging letter writer, con artist, and scammer is going to target you.
Second, even if you give it to a particular charity or good cause, if it's a one-man band operation it may eventually implode due to clashes of personality, the guy in charge burning out, or he decides stubbornly to do things his way which is not the best way. There are also potentials for scandals within the organisation, as here in an Irish case:
https://www.devex.com/news/we-were-wrong-says-head-of-irish-charity-goal-90781
So you need some way of winnowing out the fraudsters and the inefficient, and that's a big job if you have no experience and are only in this as a source of cash.
Well, I think you did a great thing. And there is enough tail uncertainty that it's possible you funded something with a low chance of very high impact, VC-style. There is a lot of chance involved. And while, e.g., a malaria charity has a pretty certain outcome, these grants might plausibly have a higher, or at least very different, type of expected outcome. I think at the margin more grants probably enriches the giving ecosystem, which is a positive.
> Because you have a comparative advantage in evaluating grants
Cackles maniacally: https://forum.effectivealtruism.org/s/AbrRsXM2PrCrPShuZ
Thanks for writing this. It gives me an appreciation for why so many philanthropists choose to spend their donations in ways that are perhaps less effective but a lot more predictable. If you buy a new building for the local university then it's almost certainly not the most effective way of spending that money, but at least you know what you're getting.
I guess I have the same problem with Effective Altruism that I do with utilitarianism; it's easy to say in theory "just do whatever creates the largest number of utils, duh" but this is a heuristic you can't possibly apply in practice, so you're back to square one.
> I guess I have the same problem with Effective Altruism that I do with utilitarianism; it's easy to say in theory "just do whatever creates the largest number of utils, duh" but this is a heuristic you can't possibly apply in practice
I used to think like this too, and then I decided to look at GiveWell's cost-effectiveness analysis: https://docs.google.com/spreadsheets/d/1B1fODKVbnGP4fejsZCVNvBm5zvI1jC7DhkaJpFk6zfo/edit#gid=1377543212
[self-deleted due to being in poor taste]
On the other hand, she (apparently) got ghosted/forgotten, then got over it and still applied for the program, just to be rejected again - for being ghosted. I don't think you need any ulterior motives to explain why she might have been a tad pissed.
Yeah, that could also be the case. It depends on details which we aren't, and shouldn't be, privy to.
In retrospect, that comment of mine was in bad taste, given that both parties involved probably read this. I think I was in too dark of a place to be commenting, as evidenced by my other comment on this post. I think I'll edit it into nothingness, but leave this up as a warning sign to myself (and others) about where my brain can take me.
I am the referenced party (substack name is a pseudonym) and all's well on my end! Scott masterfully told the story for humor (I laughed out loud), but there's a bunch of non-included detail to it that made the whole thing significantly more innocuous for both parties.
Ah, whew. Thank you for being gracious. :-)
I'm afraid I have a contrarian reflex where my mind often tries to find exceptions or alternate perspectives for stuff I see. And this combines with a defensive mechanism I developed to deal with a bad relationship, where I try to figure out the worst possible light in which some action could be taken. The result, these days, is dark humor, at least when it's funny. Otherwise it's just dark. And I probably shouldn't spring it on unsuspecting people. It's too easy for stuff to be taken the wrong way.
Let the big boys fund that stuff, no individual has enough bandwidth. If you want to beat the market, such as malaria etc, then you have to get directly involved. Pick someone that you connect with and aligns with your values, dig in deep with them, and make sure they don't get hung up on something you can prevent. Sometimes that will be money for the right thing, but the biggest value will come from holding back when you see that something would be counter productive or wasteful.
What you're doing seems like a good idea at first but can't really be better than randomly handing money out to winos on the corner.
I know it was probably heartbreaking in the moment, but I burst out laughing at "This remains the most stone-cold rejection I have ever gotten." Of all the consequences of a grants program I wouldn't have expected that one.
I’m really excited about the prediction markets-themed grant-making proposal! Wouldn’t it be more fun to open the initial round of investment (the owning of the impact certificates) to other ACX plebs like me who don’t have $250,000?
So you're saying Molly Mielke's Moth Minds may make microgrants more manageable?
maybe
Moreover, Moth Minds may mitigate migraines many morose microgrant managers must manage.
and might manifest a meaningful movement
Really liked your idea for funding grant proposals. Been reading up on prediction markets since discovering them here. Seems like there’s a need to do a dance for regulators to say “See?!?! We’re not gambling! We are just having different opinions about future events, recording those opinions, with different rewards for success!” Wondered if there was a way to do multi-year trading on those and maybe something to turn on a trickle of funding and build up as trust increases (based on -and this will do a lot of work- “some kind” of review).
I’ve often wondered if something similar to this could be done on a smaller funding scale, ie the city of Los Angeles does this for odd jobs and over time we fund and trust people to deliver sandwiches to the homeless or fill in pot-holes or even have the job of finding new odd jobs. That seems like a cheaper way to administer a city and a happier way for people to find something to do when they don’t want to be chained to a company.
- This was awesome for you to do and publicize, sorry you didn't enjoy it
- It's your money, you can do what you want with it
- You really do have the advantages you describe in section VI
- Doing your own grants as opposed to just donating everything to established EA orgs provides valuable hedging / information to the ecosystem
... but can we talk about Grant B?
ACX Grants gave an established academic $60k to jet around the world writing a book on a super trendy, politicized, non-quantitative subject.
One, that's not a long-shot project; it's a project that's not even trying. Even if there was One Weird Trick to Smash Patriarchy (which there isn't; that's "murderism" talking) this isn't the kind of work that would find it.
Two, this is a clear case where ACX Grants have zero comparative advantage. "Harvard degree" has nothing on how legible this recipient is to mainstream grant-making institutions, _and_ Tyler Cowen already wrote her a check.
Look, admittedly I also dislike that the recipient considers my presence as male in tech to be ipso facto problematic (https://www.draliceevans.com/post/smash-the-fraternity) but that isn't where my objections are coming from. I'm just disappointed that the ACX Grants program gave such a large chunk of funding to such a lackluster cause. And given the thought process described in this article I'm confused about how it happened, unless the decision was just outsourced to Tyler Cowen.
See my discussion at https://astralcodexten.substack.com/p/acx-grants-results/comment/4208051
Yeah, that's the one that makes me raise my eyebrows. Scott says that this person is working on the problem of gender equality in developing countries, but if I go by something they've already worked on, they've cracked it.
The short answer is "industrialisation".
Slightly more developed answer: if it's a village of poor goat herders in the mountains in the back of Nowherestan, then they will have rigid gender roles and traditional ways of life that mean women stay in the house, are obedient to the men of the family, and the men do all the public life.
If Amir from the goat herd village goes to the nearest Big City to get a job and earn money (because even traditional rural communities are not immune from the stresses of modernity), it's very damn likely Amir is going to run into women who are working outside the home for the first time in his life. This is going to have a *big* effect on his world-view (see rigid gender roles and traditional ways of life).
Also, if Amir meets a girl from his home village or a neighbouring village (they've both moved to the Big City to work and make money) whom he wants to marry, then due to factors such as the high cost of living (relative to the poor goat herder village), the girl *has* to work outside the home, else they'll both end up living in a cardboard box in an alley because they can't afford anything better on Amir's wages (he's a poor goat herder; he's not going to be working high-paying, high-value-creating jobs).
This has knock-on effects such as "we can't afford to have kids/ten kids like we would do back home", so things like birth control and abortion are now part of modern Big City living life. Education, so that everyone can get better jobs, or at least Amir's kids (if he stays in the city, or if he moves back home because his kids might need to move to the city later) can get better jobs. This will include the daughters. Everything else then flows from that, plus the Big City is much, much, *much* more permeable to "modern/Western/Enlightenment/call 'em what you like" values such as 'gender equality', all in the name of "making money". (Nobody does things like this out of the goodness of their heart; industrialisation needs warm bodies to work in the factories, and women are as good as men to feed the needs of 'MelonPhones needs chip manufactories at low costs').
You need women in the workforce, not at home being wives and mothers in the traditional way. You need women with some degree of education. The needs of capitalism mean that the old, rigid, traditional ways get chipped away (or power-washed away). Amir may never go home, or he may go home as a rich (by the standards of his village) man who takes a wife and has the traditional life - but he's been changed by his time in the city, his expectations have been changed, and when his kids are in their turn heading off to the city, the changes will go further with them - and so on. Eventually, small backwards goat herder village values lose out to the values of modernity, even in the small backwards goat herder villages.
You recently wrote a post about why your posts aren't as good anymore. One of your reasons is that you're focusing too much on the community stuff.
This is a community stuff post.
I think if I assign some very arbitrary rating to how interesting one of your posts is, and some very arbitrary rating to how important it is, and multiply, this will come out as the very best post you've ever written. You've lately done big work that isn't this cool, and you've written cool stuff that isn't this big, but in terms of how good a thing you've got going here? This is something great.
Thank you!
Definitely. This goes straight into the ACX hall of fame.
About second-order consequences: saying that you gave $X against malaria is nice but is, at least for me, easily forgotten, and probably doesn't lead to blog posts like this. This grants program, however, is itself a form of publicity. It shows that you and the people around you are ready to invest a lot of time in this kind of thing, which, in my opinion, makes you appear way more serious about everything "effective altruism". Of course, evaluating that will be very hard (or will it? Maybe a poll could be a start), but I think there's something there.
While reading over this, I had the thought, maybe one that you already had in the process of running the grants program, that if I were running one, I think I would aim to prioritize funding charities and programs which will continue to exist and have some plausible means of making realistic assessments of how much good they're accomplishing over time. If you fund 50 charities, and only three of them turn out to be much good, that might still end up leading to better outcomes than just donating all that money to the Against Malaria Foundation, *if* the process allows you to discover that those charities are particularly worthwhile, so that you and other people can direct more funding to them later.
I think this encapsulates the same idea you expressed in your essay on Diversity Libertarianism. Trying more things, rather than a few known-to-be-good things, can be preferable if you have a process to iterate on the things you try which turn out to be particularly good. But, it's a lot less likely to be if you don't.
Hmmm...I was thinking kind of the opposite. Part of my advantage over big foundations is that I'm faster and nimbler, which suggests being more willing to fund one-time opportunities.
I do intend to email everyone in a year and ask them what happened, and write about anything I learn from this process.
If you think that the one-off opportunities wouldn't be funded by other foundations, but the iteratable (iterable?) ones would, then that's one way to take advantage of your position, but I'd need a much higher level of confidence to donate to causes that can't be iterated on.
Is giving tens of thousands of dollars to an academic to take a sabbatical going to produce several lives saved from malaria worth of value, or several people relieved of debt? It might, but my feeling is, it's *probably* not going to have more impact than the highest value charities. But the big problem is that if it does, I can't generalize from that that it'd be valuable to fund sabbaticals to other academics; the situation is too idiosyncratic.
If there's a probability distribution where there's a 90% chance that money given to a cause does less good than money given to the highest-rated charities, then a significant part of the remaining 10% is probably taken up by the possibility of it doing a little bit better than the best-rated charities, but not a lot. So, it's probably not going to exceed the expected value of the best-rated charities with a one-time donation.
But, if it's an opportunity you can iterate on, a 10% chance of finding something that could be even higher-impact than the best-rated charities, which people could then direct more money to, could give that one-time donation *very* high expected value. It doesn't have to turn out to be much better than the current best charities for the numbers to turn out favorably.
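To make that argument concrete, here is a toy expected-value calculation. Every number in it (the 90/10 split, the effectiveness multipliers, the grant and follow-on amounts) is a made-up illustration of the shape of the argument, not an estimate for any real grant.

```python
# Effectiveness is measured relative to the best-rated charity = 1.0 per dollar.
grant = 50_000                   # size of the one-time grant (hypothetical)
p_worse, v_worse = 0.90, 0.5     # 90% chance: does half as much good per dollar
p_better, v_better = 0.10, 1.2   # 10% chance: modestly beats the benchmark

# One-off opportunity: only the grant dollars themselves produce value.
one_off_ev = grant * (p_worse * v_worse + p_better * v_better)
print(one_off_ev / grant)        # 0.57 -> worse than just funding the benchmark

# Iterable opportunity: in the good case, the discovery lets future donors
# redirect follow-on money at the better rate.
follow_on = 2_000_000            # hypothetical follow-on funding it could attract
iterable_ev = one_off_ev + p_better * (v_better - 1.0) * follow_on
print(iterable_ev / grant)       # 1.37 -> now beats the benchmark in expectation
```

The point is that in the iterable case most of the value comes from the information, i.e., from redirecting other people's future money, not from the grant dollars themselves.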
I continue to be amused that the difficulty of operations keeps surprising you. You were surprised that the Meetups Everywhere project turned into such a recordkeeping headache. You were surprised at the logistical complications of running the Adversarial Essay and Book Review contests. And now I hear you were surprised by the challenges of paying thousands of dollars to dozens of parties. I’m glad in all these cases that you find someone after-the-fact to help rescue you from the administrative quicksand, but I’d think by now you’d be better able to foresee the need before you start these projects!
Ever since I read John Salvatier's http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail my default expectation for stuff I don't have prior experience in has been "this will be a lot more involved than I think, in all sorts of unforeseen ways". This both puts me in the right frame of mind (long slog, not 'quick and easy') and leaves me occasionally pleasantly surprised when it turns out to be wrong.
I nearly laughed out loud here. Good one.
If he needed any defense: that things are going to be more complicated than one thinks in the first place (especially the first time you do it) is always true.
And I think he laid out quite a few arguments in his post why one should do it (start) anyway.
(Don't want to spam but just in case this gets missed in another comment: connecting Scott's retroactive grant with the following seems very beneficial: the xprize foundation)
I think my main mistake here was underestimating the number of applicants. I made a bet with Oliver Habryka that I'd get fewer than (I think the number was) 60. I ended up getting 656. I don't know why I was so wrong. I implicitly figured this seemed harder than book reviews and I got about 100 book review entries. But maybe the lure of "easy" money attracted people who otherwise wouldn't participate in ACX stuff. Certainly some applicants didn't seem to understand what ACX was.
I will note that although I obviously had less investment in the subject, I was also surprised when I heard you got that many applicants. I thought that seemed like a lot more people than I'd expect to think they could make a reasonable case for you to give them money for stuff.
ETA: I think I would have bet against you for <60 though. I didn't give a lot of thought in advance to how many applicants I expected you to get, but when I try to think of what number of applicants would have been least surprising to me, I think that would have been somewhere in the range of 100-150.
Maybe you got 656 applicants because, no matter how badly the application was written, actually not procrastinating but writing a proposal and hitting send was a form of therapy for the applicants? Some ideas are nice to think about, but actually having a field in which to express the idea, seeing what it looks like written down, and knowing there was low risk but high potential return in finding out whether the idea-seed catches, could in itself have been a benefit to the applicant...
AFAICT "constantly underestimates how difficult things will be" is a common attribute of people who actually get things done (often by delegating the hard work to someone who can't say no, admittedly). Those of us who predict in advance that things will be difficult often give up before we start.
See also IRB.
I think most non-trivial new projects turn out to be more complicated than expected for most people.
Also consider that when things are very easy, I'm less likely to write a post saying so.
Well, maybe you should think about doing that more often!
I'm a CPA, and you should probably talk to a tax lawyer if you haven't already. The US has a Federal estate, gift, and trust tax that may kick in if you give more than a couple million over your lifetime. It may be yet another continent of angry cannibals.
Scott knows, and I think the threshold is much higher than that.
It seems like my "let Scott Alexander handle the busywork" approach to micro-grants is not sustainable, then?
As an academic scientist with a lot of (mostly negative, with the remainder mostly puzzling) experience applying for federal grants, I find a lot of interest here. Federal agencies tend not to describe their grant-reviewing experiences with such honesty.
I have a few remarks on lesson 4:
> and then my grants program would get really famous for funding such a great thing
I know some programs that veered very hard in this direction in an ostensible attempt to become more established and build their reputations. I caution against this, because people can tell when you're just going for name association rather than actually doing anything worthwhile.
> Or suppose some promising young college kid asks you for a grant to pursue their cool project.
I think Tyler would also recommend considering the marginal impact of your grant dollars. Giving somebody their first chance has a much larger potential upside than funding an existing effort.
Finally, one point that isn't emphasized here: especially when it comes to basic research, being afraid of "failure" (in the sense of a project not being successful) is counterproductive. If anything, basic research grants should be targeting a certain failure percentage, or else they won't fund enough novel ideas with potentially huge long-term payoffs. (This Works in Progress essay discusses a related idea, "Demanding null results": https://www.worksinprogress.co/issue/escaping-sciences-paradox/.) Of course, for some of the AI or x-risk proposals here, "failure" could have significant negative externalities, which is different than not finding a new drug and has to be handled more carefully.
Peter Norvig has said that (at least in engineering) you should aim to be wrong half the time, to maximise your learning rate: https://slate.com/news-and-politics/2010/08/error-message-google-research-director-peter-norvig-on-being-wrong.html
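One way to formalize the "wrong half the time" heuristic -- my gloss, not anything Norvig says in the interview -- is that the expected information from a yes/no outcome is the binary entropy of the probability you assign it, which peaks at 50/50:

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Expected information, in bits, from one yes/no trial you assign probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(binary_entropy(p), 3))
# 0.5 yields the full 1.0 bit per trial; projects you are almost sure will
# succeed (or fail) teach you almost nothing when they do.
```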
Fantastic post, thanks for writing it all out!
>Church has seven zillion grad students, and is extremely nice, and is bad at saying no to people, and so half the biology startups in the world are advised by him
In fact, the conflict of interest section on papers he's on would be too long, so he made a webpage to list them and just links to that.
https://arep.med.harvard.edu/gmc/tech.html
(See also: https://twitter.com/ggronvall/status/991300734774923264)
I have yet to encounter a professional opportunity where I haven't been working with someone from the Church lab.
Even at my partner's defense earlier this week - it ended with a probe-independent spatial-seq project done with... someone from the Church lab (Gu lab at IPD / super cool).
Here's a third option between starting a micro grants program and donating to an EA powerhouse 501c3. Go to a public school with an impoverished population. Ask them what they need. Find a local 501c3 that can fulfill that need. Put them together and ask for a plan. If you have confidence in the plan, the people, and the partnership, fund a pilot. If the pilot works, go from there. You will have created something that, BUT FOR you, would not have happened. Of course, I'm making it sound easier than it really is. And you need to be choosy and lucky. But inspired principals and inspired executive directors of smaller, local charities are out there, waiting to be connected and a need funded.
I work in grant administration and these are some of my favorite projects to work on! So heartwarming when things go well, and even when it doesn't work out long-term there's usually at least some positive impact.
One thing that stands out to me is how much you benefited from connections and knowing people.
Makes me wonder if one of the most effective things one can do is simply to promote schmoozing among EA types. I know that since Oxford stopped their EA lectures I don't know where to go in academia to do that, and I wonder if it's a broader problem.
If I had the time/management skills I'd submit a grant request somewhere just to do a continuing EA lectures series in some fancy philosophy/CS/math department.
This has long been seen as a really important factor in all sorts of non-linear gains from cooperation, eg in the Silicon Valley VC network, but it's not obvious how to encourage it artificially because you immediately run into bad actors and bad incentives.
I think you might be able to do this kind of thing "manually" as an individual but that just sort of brings you back to the original problem.
This is a bit off-topic, but I'm thinking that it might be worth it to encourage more self-favoritism among rationalists (jobs, networking, etc.). Yes, there are costs to this, but it seems like something all really effective social movements do, and it might be worth it.
Does Oxford not have any kind of EA group now?
I don't know if the group itself still exists but the colloquia no longer do.
Having recently started making donations – not microgrants, just donating to established organizations, but attempting to identify underfunded groups where incremental dollars will make the most difference – this is so spot on.
The question I've especially been wrestling with is how to understand whether a particular organization actually needs your dollars, and how many of them, considering that they will also be raising from many other donors. It seems like a fundamentally intractable problem when a large number of donors / funding sources are attempting to make independent decisions (which is mostly the only option available under the current system). I wrote up some thoughts about the problem here, would love feedback: https://climateer.substack.com/p/philanthropy.
> The problem is: this grants program could be the most important thing I’ll ever do.
No, it's not. The chances that your $60K are going to be the difference between utopia and a thousand years of darkness are negligible. In terms of strict value for the money, you'd be better off finding six random hobos and giving them $10K each. However, this is the classic tragedy of the commons: giving $60K to hobos is the rational choice, but if everyone took the rational choice, we'd still be stuck trading slaves for coconuts, instead of flying drones on Mars. So, yes, what you do is important... just not so important that you should dedicate your entire life or reputation or fortune to it; nor is it important enough to endlessly obsess over. Half-assing the job is probably the right move.
The retroactive grant mechanisms Scott mentioned are very similar to what the X Prize Foundation is doing: https://en.m.wikipedia.org/wiki/X_Prize_Foundation
I mentioned it in a comment yesterday on the Polymarket post. Seems even more relevant now. I think connecting the two concepts and people would be very beneficial.
Re: Corporate Babble. Once I was reading a forum, and someone posted a thread asking for feedback on their pitch document for something related to the forum's topic. I took a brief look and told them that it looked like buzzword soup instead of actual information.
I expected them to be upset with me. I figured that someone does corporate babble because either:
(A) they believe it works better than actually explaining stuff, in which case their first thought will be something like "that's on purpose, dumbass", or
(B) they don't actually KNOW the information that they are pretending to explain, in which case they will be embarrassed to have been caught in what is effectively an act of fraud, and will likely try to bluster their way out
So I was rather gobsmacked when they replied with something like "yeah, that's a fair criticism; how can I improve on that front?"
(And I had no freaking clue what to tell them! I don't have any models for how corporate babble happens as a locally-correctable error! To this day, I'm honestly not sure whether what they really meant was "how can I *disguise* that better?" I guess I probably should have asked follow-up questions at the time...)
That's an interesting question. If someone doesn't know what "explain concretely" means, how can they find out?
My theory is that a person in this situation has been caught in the "learning for the test" loop. I'd expect that, if they were put in a situation where they had to coordinate with others to produce some concrete result, they'd figure it out. Some situation where it doesn't matter what anybody says -- it matters if the thing actually works.
I'm a web developer, so I encounter very few buzzwords. Because if you hand a web developer a document full of words and they don't know one of them, they'll ask, "What does this mean?" and "What are the acceptance criteria?" - i.e., what measurable result should I deliver to you?
If I get a word like "synergy" and nobody gives me any measurable acceptance criteria, I'll put a checkbox on my to-do list that says "synergize", mark it done, and move on to real work. Part of what more senior web developers have to do is train and assist the product/design/marketing/sales people around them to render requirements concretely.
So I think working my job for a year or so would be a pretty effective cure, if somebody didn't know how to tell if some text was explaining a thing concretely or not.
Having received a grant, reading this made me really anxious. Internally comparing the outcome of your project (even in the best-case scenario) with X lives saved certainly creates pressure. I don't know whether it is right to feel this way, and whether all of one's expenses should actually be weighed like this (e.g., buying a new car vs. saving 4 lives?) to put things into perspective, or whether this in the end creates a dysfunctional amount of anxiety, actually lowering your chances of success. The same reasoning should apply to the grant selection process, I suppose.
Scott writes: “How can big foundations be short of good opportunities when the world is so full of problems? This remains kind of mysterious to me”
I believe I can solve that one for you/add some reasons besides the one you give.
My reference is to government development cooperation, not charity-based altruism, but the problems are bound to be similar. (Actually, I think Scott knows the reasons very well, but acts ignorant to come across as modest and not an arrogant know-it-all. Which is a very sympathetic character trait.)
If you want to solve the problems in the world, you run into two practical problems:
1) The people with the largest problems in the world do not have any organisations, or anything else, you can “attach your money” to. That is one of their problems. So you have to rely on middlemen.
2) Money aimed at solving other people’s problems where you do not want anything tangible in return, attracts a lot of fake middlemen that mimic the behaviour of real middlemen. You try to screen the middlemen to find the real ones, but this is a signals arms race, where fake middlemen are incentivized to improve their mimicking behaviour as you develop better screening abilities. It’s a principal-agent problem. Everything is. (As your blog post illustrates.)
To illustrate. You want to solve the problems of the people with the most problems in the world? They are actually easy to identify. 1) Go to a low-income country. 2) Go to a rural & remote area in the country. 3) Locate a minority ethnic group in that area. 4) Locate the single mothers/abandoned wives/widows in that ethnic group. 5) Locate the ones with daughters. 6) Narrow in on the daughters with disabilities. Those daughters are the people with the largest problems in the world.
So how do you reach them to help them solve their problems, assuming they are still alive? (Putting a parenthesis, for the sake of argument, around the fact that they are seldom around for very long, which is one of their problems.)
You cannot reach them directly, since one of their problems is that they have no way for you to reach them directly. You have to rely on locals who know the community, preferably someone who is part of it. Usually, the problem is that the community itself is so destitute it has no one you can reach, so you have to rely on someone not-fully-local, or more likely a whole chain of further-and-further-away middlemen from the daughters themselves, i.e. a chain of intermediate agents, each with their own screening-and-signalling game attached. They are usually NGOs and CBOs of some sort.
It is enough for there to be one bad apple in that chain, one your screening abilities fail to detect, for the whole thing to go wrong.
Oh, but you can get around that by relying on indicators a la the GAVI alliance, can’t you? Well, when a measure becomes a target it invites gaming as you point out, and you are back in the principal-agent signalling arms race.
Maybe the Effective Altruism people have gotten around these issues. If so, I would love to hear how! And I do not mean that cynically. I would really love links to websites, or articles, dealing practically with this problem. Since the government cooperation agency I am familiar with has not, despite probably having vastly more resources to do detection work than an average charity (apart from the biggest US ones).
For the record, I am not in the development cooperation business myself. (Apart from a brief stint in the late 1990s when I was assigned by my government to advise an Asian country grappling with how to set up efficient welfare policies in the aftermath of the Asian economic crisis.) My knowledge stems from teaching stuff like this in a master's course, plus gossip over the years from friends who make a living in the government cooperation business. Friends who introduced me to acronyms such as MONGOs (My Own Non-Governmental Organisation) and GONGOs (Government-Owned Non-Governmental Organizations).
I don't think I "know the reasons very well". How does this apply to not giving money to Doctors Without Borders?
I meant that you know the principal-agent problems related to identifying good projects for funding. Which your blog post implicitly illustrates in good and also entertaining ways.
...my point was that these problems equally bedevil large organisations, with more money, who try to identify good projects for funding. OK, they are bigger and may hence have more resources for screening, but they are also more exposed to fakers, because they are juicier targets.
The problem is larger if you want to fund projects in places where you do not have local knowledge. Unfortunately, most people with large problems live in other countries than most people who have money that can fix problems. This goes for large donors as well as small donors. Both usually lack local knowledge.
...Bear in mind that you usually need local knowledge not only of a country, but of particular places and communities inside countries. The more problems they have in a place (daughters with disabilities of single mothers in rural ethnic minorities as an example), the more you would like to fund something that helps, but at the same time the less likely you are to have local knowledge yourself.
This certainly does not mean that offering help is impossible. For example, donors often rely on reputations to screen those local NGOs and CBOs they can trust. Because they can rationally have reason to believe that the organisation they are dealing with, provided it has carried the costs of being honest in previous deals with the likes of you or your organisation, will not take the risk of losing its entire reputation by cheating you. This stems from the fact that reputations are built up gradually (by abstaining from cheating), but can be lost in one stroke.
Trusting Doctors without Borders not to cheat those who donate to them is probably a sound strategy, for this and related reasons. However, there are many NGOs and CBOs (most of them I would argue) whose work is much more difficult to evaluate. Perhaps particularly local ones, but that is hard to say.
Great write-up, thank you. There is something out there that resembles somewhat your last idea, albeit for government: https://en.wikipedia.org/wiki/Social_impact_bond
> That is, funders give them lots of money, they’ve already funded most of the charities they think are good up to the level those charities can easily absorb, and now they’re waiting for new people to start new good charities so they can fund those too
Meanwhile, funding open source projects used by almost the whole industry is apparently still unattainable rocket science.
That's a good point. I don't know whether anyone even had it together to write a grant application on the subject.
I applied and was not successful.
Is there a simple explanation for how there's apparently a glut of money in charity, and Against Malaria Foundation can still save a life for $5000, a claim which I've been seeing for many years now? Surely there's no shortage of billionaires willing to do that, so why doesn't the price to save a life go up?
There is a glut of money in charity among people unaware of, or refusing to donate to, the Against Malaria Foundation.
AKA effective altruism is not known/accepted widely.
There is no shortage of Africans dying of malaria, but if you look up numbers from 5 years ago, AMF used to be able to save a life for about $3k (https://www.businessinsider.com/the-worlds-best-charity-can-save-a-life-for-333706-and-thats-a-steal-2015-7), so the price did go up.
But mostly it's that people are much more excited about trying to go for non-linear gains.
> five paragraphs explaining why depression was a very serious disease, then a sentence or two saying they were thinking of fighting it with some kind of web app or something.
Sounds like most grant proposals I've seen people write, and get approved.
I think second-order effects of this process are worth mentioning. A lot of VCs and rich people read this blog, so if you inspire some of them to do their own microgrants, you can surely move orders of magnitude more money.
It also serves to start a discussion in these circles, helping others who need it to become better at this kind of thing and helping you become better at this too, so that next time you can do an even better job. I think a lot of people agreed with your choice of charities, as most are awesome and inspiring. Hope we get updates later too on how it's going. People have different values and I imagine it's way more exciting to fund moonshots than the "boring" choice of giving it all to AMF.
Also, if doing it once was worth it despite all the headaches involved, doing it a second time should be way easier, especially if you get help and prepare.
I also expect a second microgrants program to get fewer applicants, considering most were already evaluated in the first round, so it shouldn't be that hard the second time.
Still, thanks for doing this. Hope some of them work out.
Section V here seems to be a much bigger condemnation of the American banking system than you might have realised.
Here in the UK, I can transfer to any other UK bank account, given the sort code (a six-digit numeric code that identifies the bank the account is held at) and the account number (always an eight-digit number).
There are three different systems:
BACS, which is almost free (a standard per-transaction fee of 35p), has no upper limit on the amount transferred, and takes three days.
CHAPS, which is same-day guaranteed, has no upper limit on the amount transferred, but is much more expensive (£25-35 per transaction).
FPS, which is same-day but not guaranteed (it falls back to BACS in some rare situations), has an upper limit which was £150,000 until recently but is now £1,000,000 (pandemic change, though now made permanent), and has similarly small fees to BACS (45p per transaction).
Personal bank accounts normally have free BACS/FPS transfers (i.e. the bank eats the fees, though they get bulk rates, so they are paying a lot less); business accounts normally pay modest fees. I use BACS and FPS all the time, e.g. to transfer money to friends much like Americans use Venmo. I have used CHAPS exactly once, which was to transfer the money from my mortgage to the vendor when I bought my house. It tends to only be used for very large transactions where timing is very important (like buying a house).
Doing a series of large transactions like paying out grants would require a business account, would require paying fees - but modest ones - and would require contacting the bank so that a large number of payments doesn't trigger their fraud alerts. But the other hassles you had to deal with would just not arise.
American banking is notorious (to Europeans) as being backward. We almost never use cheques, Venmo is pointless here because you can just use BACS (or the other equivalent domestic systems in other countries), or SWIFT for international transfers. You just need their account details / SWIFT code and you transfer directly to a bank account using the app on your phone.
Even for bigger transactions - I work for a big enough organisation that we have recently hired someone to work full-time on sending out and receiving money in the US because the US banking system is so difficult to deal with.
Just to flag up why this system was successful: it was originally founded by 16 large banks in the 1980s. That covered over 90% of UK bank accounts from foundation.
Other banks either join the system as full members or they contract with a big bank to run transactions through that big bank.
In 1996, the "Truck Acts" (the laws that banned companies paying in scrip that could only be spent at the company store) were amended so that employers could refuse to pay in cash provided they would offer to pay via BACS. The result was that the few small banks that didn't support BACS immediately moved to do so, because their customers wouldn't be able to get paid.
Since there's so much funding available for AI-related ventures, this got me thinking that some of the ACX grant applications will probably be more in need of staffing than grants at this point. If you're looking for an employee for your AI research organization, I want to work for you! I'm finishing grad school this year and am currently looking for a job doing data science, ML engineering, AI safety research, or anything related. I'm also potentially interested in helping with any projects of the sort that would be pitched as an ACX grant application. I think I get notified of replies here but I can also be reached by email at robirahman@g.harvard.edu. (Sorry in advance if this kind of comment is only allowed on classified threads; no offense taken if it gets removed.)
ACX grants are relatively small, so recipients probably have the manpower they need already. And it sounds like Scott prefers not to fund AI work, as other orgs already specialize in that.
Is there a plan to measure the success of each of the grants to determine which ones were successful, moderately successful, unsuccessful? I'm unlikely to run a large-scale grant program, but I am interested in the problem of how to effectively improve the lives of those in my immediate community, as well as those in the broader country/world through small grants.
Perhaps we're not aligned on this, but maybe there are others who read this blog who would like to know how to best distribute their surplus in a way that will persist after the initial infusion of cash is given. Where can my extra dollar do the most good for others? I feel like paying off a credit card debt isn't a good use for this, since it feels about as effective as chasing rats out of the house. It looks good for a minute, but they'll just come in around the back anyway. It's not solving a problem, so much as pushing the problem back a few months. I'm really looking to answer the question, what can I do that will change things - even if in a limited way?
I guess the answer to the question is "probably not a grants program", because a grant would require a deeper knowledge of the subject and circumstances than I can give.
I think your experience here outlines a major objection to the Rationality movement - the information needed to calculate the "best thing" to do simply isn't available. Or, at least, isn't available at a cost lower than the difference that information would make. That is, if you had to choose between paying market rate for the free specialist consulting you got or doing without it, the price would have been larger than the difference the specialist knowledge made to the effectiveness of the grant distribution. So (benefits of uninformed selection) > (benefits of informed selection - cost of specialist knowledge). Informed selection only comes out ahead when you can get the specialist knowledge for free.
For the typical person facing typical decisions, the cost of specialist knowledge will exceed the benefit that informed selection has over uninformed selection. Meaning that uninformed decision-making will be the rational choice.
Thus being irRational is rational.
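As a rough illustration of that trade-off, here is a minimal sketch with made-up numbers; the function name and figures are hypothetical, not from the post:

```python
# A minimal sketch of the trade-off described above. All numbers and the
# function name are made up for illustration; nothing here comes from the post.

def informed_selection_worth_it(benefit_uninformed: float,
                                benefit_informed: float,
                                cost_of_expertise: float) -> bool:
    """True if paying for specialist knowledge beats choosing uninformed."""
    return benefit_informed - cost_of_expertise > benefit_uninformed

# When the expertise is volunteered for free (Scott's situation), it helps:
print(informed_selection_worth_it(50_000, 60_000, cost_of_expertise=0))       # True

# When a typical person must buy it at market rate, it often doesn't:
print(informed_selection_worth_it(50_000, 60_000, cost_of_expertise=25_000))  # False
```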
An interesting project would be to create a crowdsourced approach to grant funding clearing. Imagine a site where someone seeking a grant could upload their proposal and then pre-approved volunteers with relevant experience in the relevant domain could review it for the obvious problems. Amazon's Mechanical Turk is almost what you want for this, notwithstanding the initial credentialing problem.
I'm surprised nobody in these comments has mentioned the Heilmeier Catechism, so I'm mentioning it here: https://www.darpa.mil/work-with-us/heilmeier-catechism
Many years ago, I read a piece by Bill Gates in the NYT, about what he had learned in 5 years in his non-profit work aimed at eradicating malaria etc. He basically said, he had failed.
It was humble, transparent and introspective. I tried to look but could not find the essay. I think there is much to learn there, on how to figure out where to invest your money, to help people the most.
It seems to be surprisingly hard to give responsibly.
> I take it all back. The crypto future can’t come soon enough. Sending money is terrible.
Why not go the "crypto" (that is to say, cryptocurrency) route? Buying (e.g.) bitcoin is not generally hard (for the amounts I tried so far, which are four orders of magnitude smaller), and once you have it in a wallet you actually control (i.e. have the private key for), there are no practical limitations on how much you can transfer.
I can think of a few possible reasons (all of them with a question mark):
* Buying bitcoin for millions of dollars is hard to do in the US
* Doing large BTC transactions will get the IRS on you or get you into trouble with money laundering laws
* The recipients were reluctant to accept bitcoin for legal (see above) or organisational ("we are a respectable university department, we don't do bitcoin") reasons
* Computer security concerns
* Usability concerns (e.g. sending grants to invalid addresses due to typos)
Of the ACX ++ grants, it seems that a total of three applications include a bitcoin address, which I would consider convenient for donating smaller amounts (as opposed to contacting strangers using a publicly stated email address and just hoping their mail account opsec is good enough and that they won't guilt-trip you into donating more than you wanted to or anything).
> (2) Most people are terrible, terrible, TERRIBLE grantwriters
I do grant writing for nonprofits, public agencies, and some research-based businesses: http://www.seliger.com. The fact you've noticed is why we have a business. Effective grant writing is extremely hard, and most people aren't good at it. ACX grants are relatively small, and I often work on projects in the $500k - $5 million range; we work on a flat-fee basis, typically in the $5,000 – $15,000 range.
Many people who are great at what they do are terrible at grant writing.
Are there not fees with crypto transactions? Both the fees of the actual transaction, plus the fees of converting dollars into, and then out of, the currency?
I think you're misusing "comparative advantage" in a way that may be important.
Comparative advantage is relevant when comparing two resources that can't be directly exchanged for each other but can be used to produce the same outputs. In that case, even if one resource is less productive when producing either output, there are gains to trade as long as the relative productivities of the resources differ. In the classic example: I can use an hour of my time to make one fusilli sculpture or to make two cucumber pizzas. You can use an hour of your time to make two fusilli sculptures or three cucumber pizzas. Your time is more productive than mine at everything (absolute advantage), but I only give up half a fusilli sculpture by making a pizza, whereas you give up 2/3 of a fusilli sculpture, so I have a comparative advantage in pizza making.
In this case, the crucial resource is money, and it's directly exchangeable. If you think that GiveWell is better at grantmaking than you, you can give them your money. If Scott Aaronson is almost as good as GiveWell at identifying STEM education grants, and much worse at identifying global health grants, he has a comparative advantage in STEM relative to GiveWell. But he'd still be better off giving his money to GiveWell than making grants himself, even if all of it was for STEM education.
Maybe the equivalent is to imagine that you could buy a $200 blender that could puree a soup in 10 minutes or crush a pound of ice in 15 minutes, or a $200 blender that could puree a soup in 30 minutes or crush a pound of ice in 20 minutes. The second blender has a comparative advantage in ice crushing, but since it's absolutely worse at both tasks, you should never buy it.
The lesson here is: only make your own grants if there's nobody you can give the money to who could make better use of it than you can, in an absolute sense.
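To make the arithmetic concrete, here is a minimal sketch of the sculpture/pizza example above (hourly rates taken from the comment; the function name is just for illustration):

```python
# Hourly output rates from the example above.
me  = {"fusilli": 1, "pizza": 2}
you = {"fusilli": 2, "pizza": 3}

def opportunity_cost(person, make, instead_of):
    """Units of 'instead_of' given up for each unit of 'make' produced."""
    return person[instead_of] / person[make]

# Absolute advantage: you out-produce me at both goods.
assert you["fusilli"] > me["fusilli"] and you["pizza"] > me["pizza"]

# Comparative advantage: my opportunity cost of a pizza (0.5 sculptures)
# is lower than yours (~0.67), so I'm the one who should make pizza,
# even though you're better at both tasks in absolute terms.
print(opportunity_cost(me, "pizza", "fusilli"))   # 0.5
print(opportunity_cost(you, "pizza", "fusilli"))  # 0.666...
```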
Maybe he should have said "relative advantage" or "better ability"?
I think he should have just used "advantage," or even "absolute advantage." I'd be fine with "better ability" as well. The rule I would argue for would be something like: "Make your own grants if you think that you can direct money to solve a problem more effectively than anyone else you can give money to." The problem is that he explicitly contrasts his use of "comparative advantage" with "absolute advantage."
> so I jury-rigged a Biology Grants Committee out of ...
FWIW, I think you mean "jerry-rigged". See: https://www.dictionary.com/e/jury-rigged-vs-jerry-rigged/
>And that's an easy one. What about B? If the professor figures out important things about what influences gender norms, maybe we can subtly put our finger on the scale.
She won't. She's a woke loony and her demented ideology will prevent her from ever producing any meaningful insight into the world.
Maybe there are heritable population differences that explain most of the variation. Maybe it's a result of industrialization, and heritable factors affect whether/when a population can undergo industrialization. Maybe these things are true, maybe they aren't, but Alice Evans will never in a million years discover that they're true, because her ideology prevents her from even thinking about the world in this way, let alone looking there. And if somehow this leaps out at her, she's sure as hell never going to put that in a book.
Am I the only one who sees a paradox in:
"If I misallocate $5,000, I am basically killing someone in Africa" and "Big foundations have more money than opportunities"?
I found this whole post really affecting, and it has rejuvenated my respect and interest in ACX. But it also left me with two conflicting realizations.
It helped me understand why I've always been wary of utilitarianism as a moral code and effective altruism as a practice. Firstly, because it's kind of counter to the human moral instinct - which is just to do what you think is right at the time. And secondly because it's so bloody difficult - almost impossible, in fact - to sort out all the confounding variables and unknown cascading second-order effects.
But it also convinced me that if I was going to give a lot of money to a grant-giver I'd want it to be Scott. That level of genuine concern combined with serious questioning is pretty rare. And I think good can come out of this if it can come out of anything.
My Heuristic That Almost Always Works is - everything is far more complicated than you think it is
Thanks for sharing your grantmaking experience. I wish other grants programs did more of this.
Please do a follow-up for the ACX Grants++
How many got funding? How many got some other help?
A vague thought... I wonder how many of these grant proposals basically reduce to "Give one or more people a basic income guarantee for a while, so they can buy food while they focus on whatever they naturally want to work on." And would therefore be moot in a world in which UBI exists.
If that's a real class of grant proposals, could one build a useful heuristic around that class? Like, if solving a problem requires money for reasons other than feeding the people who are already enthusiastic about working on it, does that somehow make a targeted grant more beneficial than otherwise?
I work in grants (both proposals and post-award administration) and have some suggestions on how to cut down on the garbage applications. The application form is no longer viewable so I couldn't check - you may have been doing some or all of these things already.
* Require an itemized budget as part of the application.
* Ask applicants to answer several specific questions. Instead of just "why should we give you $," break it into sub-questions, and place a word limit on each. Common things to ask about are why the topic area is important, what expertise the applicant brings, what makes the project uniquely likely to make a difference, and how the money will be spent (this is often a separate document called a budget justification). This will cut down on needing to read through pages on why depression is bad followed by "we're gonna make an app."
* If there are certain things you will definitely not fund, state those up front. Talk to a lawyer about this beforehand as there are some weird rules around international giving in particular.
* Have applicants indicate whether they are applying as an individual or an organization. If as an organization, require proof that the organization actually exists.
* If you plan on imposing any conditions on the funding, state that up front. Common ones include not paying until you receive an itemized invoice of expenses incurred, requiring progress reports and/or a final report at the end of the grant period, and limiting the recipient's ability to re-allocate the funds. That said, more conditions = more work for you, so for this type of program it's totally reasonable to just throw cash at people with no strings attached.
Why aren't grant proposals public? If all grants were put on a public website, like the EA forum, and funders could look through them, and anyone could contribute their expertise, what would be the downside?
Regarding EA having too much money and not enough ideas to fund -
There are a lot of smart people who I’m sure would love to dedicate their minds / careers / time to solving tough EA-adjacent problems. However, if you’re in a top-tier job / career, you don’t have the time or mental energy to get close to solving, or even thinking about solving, some big problems. Given the too-much-money thing, it seems like “earning to give” does not have the same value it maybe used to? It would be cool to see some sort of EA charity start-up incubator that would pay good salaries and could lure talented people away from top-tier jobs and give them time to tinker / build / think up solutions to problems that meet EA criteria. It would be classed as a high-risk fund from an EA perspective, but how beneficial would it be to discover / build another charity with impact on par with the Against Malaria Foundation?
Hey! I was thinking about “Because you have a comparative advantage in soliciting proposals”
I’ve also long thought about diversity and representation problems in EA/entrepreneurship/funding/many of the adjacent spaces.
So I had this idea: what if you tried to source proposals from interesting people throughout the world, the developing world in particular. The goal would be to find suitable candidates that had never even considered the possibility of a grant. The invitation could be as simple as guaranteeing that their proposal would get read.
There’s a couple ways you could do this, here’s 2 that come up for me:
1. Ask people who have done a lot of traveling “have you met anyone while traveling who is XYZ?”, where XYZ are factors that might indicate your values. I personally might say something like “unassuming/humble, community oriented, generous, smart, potential for impact”. In my travels I’ve met several such people, like https://notmadyet.com/ and http://gogreatergood.com (both of whom are EA aligned!), and more. There are plenty of more well-known travel bloggers too.
2. Ask people who have spent real time abroad doing humanitarian work the same question. I first thought of people who have taught English abroad, done Habitat for Humanity, etc. If you did a survey of former Peace Corps workers asking “name up to 3 people you think could do a lot with this grant”, any names that came up more than once would likely be pretty interesting candidates, in my opinion.
In this model, there’s a middle source layer. You’re relying on their judgment being reasonably sound, on them having spent enough time somewhere to have more than a passing understanding of the landscape, and on them being roughly values-aligned.
There’s near infinite possible problems with this. Yet given the typical value of diversity, deeply connecting with “users”/“customers”/beneficiaries of ones work for feedback, and context awareness (I think you could reasonably call these candidates “subject matter experts” in the issues facing their local communities, at least), I think this would be a stream worth at least experimenting with.
I’m happy to do a bunch of this work, or at least get it started, if you (or another grant maker) is interested.
I think there is an explore/exploit trade-off going on with charitable giving. I think it is a reasonable approach to put a percentage of your money into 'explore' activities and a percentage into 'exploit' ones - nets to prevent malaria obviously being in the latter category, for example - whereas it feels like the strength of a grants programme like this is to support 'explore' activities, some of which you expect to fail.
I've been a referee on small grant proposals and it is hard though - everything you say resonates!
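If it helps, here is a minimal sketch of that explore/exploit budget split; the 20% explore fraction and the budget figure are arbitrary assumptions, not a recommendation:

```python
# Split a giving budget between proven 'exploit' charities and speculative
# 'explore' grants. The 20% explore fraction is an arbitrary placeholder.

def allocate_giving(budget: float, explore_fraction: float = 0.2) -> dict:
    explore = budget * explore_fraction
    return {
        "exploit (e.g. bednets)": budget - explore,
        "explore (speculative grants)": explore,
    }

print(allocate_giving(10_000))
# {'exploit (e.g. bednets)': 8000.0, 'explore (speculative grants)': 2000.0}
```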