Sorry for being late to the party, but I fail to understand the central argument: why would anyone bother to buy the credit? Even without the premium, this would hardly make any common sense.
In "normal" markets, patents, companies, or brands are bought with the expectation of gaining more profit from them later. That makes sense as an investment.
Products and services are bought as a means to an end. That makes sense as a purchase (or as an investment, too).
Buying the credit after the fact seems to me to make sense neither as an investment nor as a purchase. Just "bragging rights"?!
I think for the same reason any charity funds any project (i.e. some combination of altruism, fuzzies, and bragging rights). The proposal is that, instead of the current system where the charity gives money in advance to fund a project that may or may not work, it gives the money afterwards to a project that did work.
Opportunity cost: why would a charity pay $5M retroactively for a $1M project, giving up at least $4M that could fund other projects?
Definition of charity: if a charity is generating profits for private investors, is it still a charity? Or a sale of indulgences?
Bragging rights: even setting aside how doubtful those bragging rights are, aren't you diluting the signal of the original contributor? Don't we want them to keep the bulk of the reputation, in the hope that their (hopefully good) future ideas get funded again?
This market seems to lack a credible buyer (beyond the initially motivated creators of the market).
> And it's only a "$1m project" in the sense that a $500 pair of designer shoes whose raw materials cost $20 is a "$20 pair of shoes"
Yeah, that's how *non-profit* works. The project is priced at its costs, including salaries, plus $0. There is no excess of revenues over expenses. That's what makes it different from business.
It's part of the reason why people are willing to spend money on charities or work there for a less-than-competitive salary, effectively donating their time.
Please, let's not get into ideological 'market is better than state' (or vice versa) arguments; I'm tired of ideological fights over 'market is great' versus 'market sucks', especially when they come as a side effect of whatever other topic is being discussed.
I didn't compare non-profit and for-profit for effect, and didn't intend to do so. Whether you define 'a $1M project' as '$1M is what people would pay for it' or '$1M is the actual cost of the project' is, well, a matter of definition. But it's worth noting that one is typical of market logic and the other of non-profit logic, and that there are two very different groups out there, each functioning primarily according to one of them (and sometimes despising the other).
I understand the argument, but I fail to understand how a charitable project can be valued higher than its actual cost (as in the shoe example). It still seems to upend the definition of charity. Another commenter here asked the same question: how is that value defined?
Is it all just banking on the vanity of billionaires and charity funds? If so, what's the real fundamental difference from today? A billionaire fund can already enjoy the glory of the Good Deed. The only addition would be to funnel away some charity money into private hands (and to attempt to slightly shift who gets to bathe in the glory, but that's just a technicality and no difference in principle).
> I understand the argument, but I fail to understand how a charitable project can be valued higher than its actual cost (as in the shoe example).
Let's assume that one may distribute life-saving medicine at a distribution cost of $5 per person, with the medicine costing $5 per unit, 1 life saved per 10 distributed units, and all overheads doubling the cost.
This unrealistically optimistic case would give $200 per saved life.
I think we could find plenty of people willing to pay over $200 per saved life, which gets us to a case where a charitable project is valued higher than its actual cost.
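For concreteness, a minimal sketch of that arithmetic (all figures are the commenter's illustrative assumptions, not real data):

```python
# Cost-per-life arithmetic from the example above.
medicine_cost = 5.0        # $ per unit of medicine
distribution_cost = 5.0    # $ per person to distribute one unit
overhead_multiplier = 2.0  # "all overheads doubling the cost"
units_per_life = 10        # 1 life saved per 10 distributed units

cost_per_unit = (medicine_cost + distribution_cost) * overhead_multiplier
cost_per_life = cost_per_unit * units_per_life
print(cost_per_life)  # 200.0 -> $200 per saved life
```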
>Is it all just banking on the vanity of billionaires and charity funds? If so, what's the real fundamental difference from today?
The difference is that the charity is not taking on the risk of allocating funds and is not expected to singlehandedly know which projects should be funded for maximum impact. Private individuals/funds who think they are better at evaluating the charitable impact of a project take on the risk, and the charity only spends money on stuff that works.
Compare it to a question like "why bother punishing criminals for crimes, when they've already committed the crime and the punishment won't reverse it?" Although the punishment itself doesn't help, the fact that everyone expects crime to be punished decreases crime, and people won't expect that unless you follow through. So punishment "causes" less crime, even though it never directly changes the behavior of the specific crime it's applied to.
In the same way, if you accept the arguments I gave at the beginning for why funding charity through impact certificates is better than the normal way, then by buying the certificates (i.e., rewarding good purchases) you're causing the impact certificate system to exist, and causing more charity. That is, the system only works insofar as everyone expects their charitable projects to be rewarded, and the best way to produce that expectation is to make it true.
To put it another way: suppose a charity credibly promises to buy impact certificates. Lured by that promise, an investor funds a good project and makes it happen. The charity's promise has caused the good project to happen. Then the charity has to decide whether or not to keep its promise. If it breaks the promise, the good thing will already have happened, but nobody will trust the system ever again, and they won't be able to make more good things happen through the same method.
The aim is creating an impact market in the hope of producing a higher rate of (effective) charity projects than existed before the market.
The tradeoff is that, due to the generation of pure profit, total expenditures rise faster than the total amount of good done (the number of additional projects times the charity done per project).
If this is correct, isn't this somewhat antithetical to the idea of EA? More good is done in total, but less efficiently per dollar. A counterargument could be that the profit generated could be invested into more charitable projects. But private investors could also just build another mansion with it.
Also:
What about gaming the market? I.e., exploiting the market by creating a problem in the first place, like delivering faulty or low-quality medicine, and then getting yourself funded to fix the problem with That One Easy Trick.
What about cannibalization of existing charitable efforts by creation of the impact market? Some companies already engage in charitable efforts for example. Now that there is a market, they could be forced by these market forces to capitalize on their charitable efforts. Is this good? Is this exactly what you are looking for?
I can't get around the fact that in the end, somebody has to pay the investors. Does this really solve the problems that charity is trying to solve? Couldn't the market just devolve into an incentive to keep societies, countries, people that need charity in an everlasting state of needing charity?
Imagine all funders together are willing and able to spend a sum M to make the world a better place (to create impact in areas deserving of their charity). Without 'intermediaries', you can spend all of M directly on achieving impact: the sum available for impact is I(0) = M. Enter 'profit', totaling a sum P. Now the money you can spend on impact is I(1) = M - P.
So if you add profit, you lower the total amount of money available for achieving impact: I(1) < I(0). That's the tradeoff. The higher the profit P, the lower I(1) (given that the overall amount of money M available through funders is fixed).
The inefficiencies/efficiencies you're talking about relate to how efficiently we spend whatever I(0) or I(1) is available.
In an ideal world, we would be able to spend all of M on impact, and spend it in an extremely efficient way. The world is not perfect, but it still makes sense to strive for maximum efficiency while limiting profits as much as possible.
The tradeoffs come into play where more of the one cannot be achieved without more of the other.
As usual, there are additional side effects not included here: e.g., would more efficiency raise the interest of additional funders (M(1) > M), or would more profit orientation deter other funders (M(2) < M)?
Average market net profit is ~5%. I believe very few charities operate this close to private-sector efficiency. There are charities whose efficiency could easily be improved by an order of magnitude or two. That is far more waste than the 5% profit margin typical of for-profit companies.
> If this is correct, isn't this somewhat antithetical to the idea of EA? More good is done in total, but less efficiently per dollar.
If the selection process is better than the alternative selection processes, it may be worth it.
For example, let's say we have Scott with $5 million in funding, willing to fund this way but unwilling to go through manual grant selection.
In that case you either get projects funded this way (with some losses) or nothing, making the first option more effective.
Or maybe Scott should donate that $5 million to anti-malaria nets.
> If this is correct, isn't this somewhat antithetical to the idea of EA? More good is done in total, but less efficiently per dollar.
I'm pretty sure EA mostly cares about the total amount of good done. It tends to focus on efficiency per dollar because often that's easier than increasing the total number of dollars, but that's not the core goal.
This makes complete sense, but I think it's still quite counterintuitive to pay for a good that has already been achieved. People want their money to go toward creating new good things. The fact that overall the system might be better with retroactive funding doesn't change the fact that that specific step will feel like an odd way to spend their money.
This is rather unlike punishment for crimes, where many people seem to share a visceral belief that those who commit evil acts should suffer for them.
The 'prize' idea is the closest parallel, but even with typical prizes, while the prize is given after the fact, it's committed to before the fact, so at commitment time there's still no retroactivity.
No indeed; in fact, I really like the idea. I'm trying to work out why my feeling is that, as presented, it won't get much public interest. Perhaps it's possible to educate people to pay for goods already achieved (and to pay more than they cost) on the basis that it'll bring more good in the future, but it seems like it might be working against the grain. Even if it is, perhaps rationalists control enough charitable money to make it work.
You can look at the state of free software funding. Lots of very cool projects do make money from donations after the fact by fans, but the level of recompense most projects currently receive is far, far below commercial equivalent costs; for this to actually work, there has to be an expectation that the works will be rewarded commensurate with their value.
I'm still confused by the idea of selling equity in a moral action. Outside of Section 2, and 11a and 11c, I think the post makes sense. But the idea of that we can meaningfully transfer social credit for a successful intervention, in whole or in part, from founder & investors to the oracular funder seems totally absurd, to the point that I question whether I really understand other parts.
Say Alice the Microbiologist founds the End-Malaria-in-Senegal project. She sells 90% of the impact shares to Bob the Impact Investor for $900k, and keeps 10% for herself while investing $100k of her own money. The project ultimately succeeds, and Carol the Oracular Funder buys Alice and Bob's shares for $5 million. A week later, Alice, Bob and Carol are attending the same social gathering. People ask each of them what they've been up to.
Alice: "I founded and lead a project that successfully ended malaria in Senegal through impact equity funding. I funded 100k of the project myself, and sold my shares for 500k."
Bob: "I spent $900k to fund 90% of a project that ended malaria in Senegal, then sold my impact shares for a 5x return."
Carol: "I spent $5MM to buy 100% of the impact equity for a project that ended malaria in Senegal."
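For concreteness, a minimal sketch of the cash flows in this story, assuming the $5MM buyout is simply split pro rata by share ownership:

```python
# Cash flows in the Alice/Bob/Carol example (figures from the story above).
buyout = 5_000_000  # Carol buys 100% of the impact equity

stakes = {                     # (equity share, cash put in)
    "Alice": (0.10, 100_000),  # founder keeps 10%, invests $100k
    "Bob":   (0.90, 900_000),  # impact investor buys 90% for $900k
}

for name, (share, invested) in stakes.items():
    proceeds = share * buyout
    print(f"{name}: put in ${invested:,}, sold for ${proceeds:,.0f} "
          f"({proceeds / invested:.0f}x)")
# Alice: put in $100,000, sold for $500,000 (5x)
# Bob: put in $900,000, sold for $4,500,000 (5x)
```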
Is everyone supposed to be 0% impressed with Alice and Bob and 100% impressed with Carol? When I imagine myself among the guests, I notice that my internal moral value function is assigning the most points to Alice, and a nonzero amount to Bob as well. And if I'm the dean of the medical school where all three of them studied, Alice is the only alumna I'm excited to brag about to my social rivals at the next semiannual medical school deans' wilderness retreat.
I also notice that my feelings don't shift much if instead Alice and Bob keep half of their respective stakes after the project succeeds. I can tell myself that Carol should get 50% of the credit, Bob 45%, and Alice 5%, but I can't make myself feel that to be true in a meaningful way. Maybe I could, if I'd grown up in a world where impact markets already existed, but even then I'm skeptical. And I think Carol would realize this too, such that buying the credit for the project wouldn't be worth $5MM to her.
OTOH, if we insert another event at the beginning of the story, where Carol publicly precommits to paying $5MM to whoever can end malaria in Senegal, then everything makes sense again. In this version, Carol doesn't buy any credit from Alice. Instead, Carol gets (an entirely separate pool of) credit as soon as she makes the commitment. Her virtuous act isn't curing malaria, it's pledging $5MM to a system that makes things like curing malaria possible.
When the project succeeds, and Carol pays out, Alice keeps all the credit, just as if she'd funded the project through philanthropic grants. Bob mostly just gets rich, with some vague added bonus of being able to tell people he did it via impact investment, rather than fossil fuels or child labor or whatever.
The crime/punishment analogy makes perfect sense, and I don't see the need to introduce the idea of fungible fractional shares of some ethical activity.
I should add: Carol either gets a different set of social credit, as you describe, or (more likely in my opinion) has to be willing to walk away feeling satisfied. I think Carol, if they are a good effective altruist, ought to let Alice get the social acclaim, to incentivize future founders.
Maybe you addressed this point and I simply missed it, but otherwise it seems as if you're glossing over what looks to me like the most difficult problem with this idea: how does an "oracular funder" evaluate the outcome of a project and assign it a specific dollar value?
Without a well-designed, aligned, transparent, and fair mechanism to produce such evaluations, everything falls apart.
Not necessarily - the evaluation by people working at an anti-homelessness charity of whether or not to do a particular intervention only has to be as sophisticated as "homelessness will probably reduce if we do this, so let's do it". The literature on charities is littered with examples of charities doing stuff that sounds good on paper but doesn't work in practice.
However, the oracle needs to identify *how much* homelessness was reduced by; merely saying it seems to have gone down 'a lot' (or even "between 10% and 15%") is insufficient. They also need to either directly assign a utility value to ending a marginal case of homelessness, or indirectly do the same by quantifying utility before and after the intervention. And this raises the problem of whether all homelessness is equally serious; what if the intervention 'cherry-picks' the least severe forms of homelessness because the founders want to make money off the "$2000 per homeless sleeper housed" bounty announced by the oracle?
Isn't retrospective evaluation at least as easy as prospective? Even after an anti-homelessness charity has done something, we can always evaluate their claim the way we would have prospectively, without taking any measurements of actual outcomes.
You can't go the other way and turn a (truly) prospective evaluation into a retrospective one, though.
Thinking about it, maybe the difference is in how we're viewing this. I was thinking from a "partial derivative" standpoint: hold the level of thoroughness of the evaluation constant, change the timing. From a "total derivative" standpoint, asking for a more thorough evaluation makes it harder, even if changing the timing makes it somewhat easier (that is, some questions that are literally impossible to answer prospectively might "only" be extremely difficult to answer retrospectively).
These markets probably do require a more thorough evaluation. That is, it's important to identify specifically which one out of five interventions actually succeeded, or to at least narrow it down to maybe something like 50% confidence in two out of the five. Thus, I now think you're more correct. I'll leave my comments here in case others find the thought process helpful.
I think the problem is that you have to prospectively commit to payout criteria. Those criteria must be pre-defined, measurable, legible, etc. And there's a Schrödinger's-cat problem here, where the act of prospectively creating these payout criteria incentivizes some bad apples to solve the problem in an unsatisfactory but technically-acceptable way.
If this is true, then why is there no such scrutiny of charities today that do not or cannot prove they are allocating resources efficiently? Otherwise this is an isolated demand for rigor.
The choice is between the current system, throwing money at projects that we have little way of knowing will work, and a different system that forces those involved to actually work out/prove whether they're spending money efficiently. And given that investors won't invest until proper, verifiable criteria are established (and have an interest in helping establish those criteria themselves), it's not as if a bunch of money will be invested and then somebody realizes they can't tell who was successful and the whole thing suddenly collapses.
Under the current system there can be good faith disagreement about how well a charity works. If I think charity A works better than charity B then I donate to A and you to B. There are a number of problems with this system - not least the one you describe where ineffective charities with good PR are overfunded - but at the end of the day we can both be happy with our choice.
Under the proposed system there *cannot* be good faith disagreement, and this is why I am not making an isolated demand for rigour. The oracular funder needs to specify to the dollar how much impact a charity has had, and then defend its decision against financially motivated backers who disagree with it. Both the requirement of good faith attempts at quantifying impact and the cost of arbitrating disputes differ between the current and the prospective system.
I don't think the original post has any discussion of funders helping establish criteria for effectiveness. I'd be very concerned if people with a significant financial stake in X rather than Y being true were invited to offer their opinion on X vs Y; it is a significant point of failure in pharma regulation, but there it is unavoidable (because the company owns the data), whereas it is avoidable here.
I agree overall that this problem exists, but I think the key is this word "oracular." As in, what the oracular funder says, goes. And that's a risk that must be priced into the market. Of course, the oracular funder should, as good practice, pre-declare criteria, but they could refuse to pay out if they perceive something was done in bad faith, even if the outcomes technically fulfill the criteria.
This is absolutely the limiting factor. Evaluations of impact in the social space are incredibly opaque and expensive right now. In capitalism, markets work because there are clear 'market tests': the person who is supposed to benefit (the consumer) is the one deciding how to value the product/service, and paying for it. Incentives are perfectly aligned for markets to generate social benefits. That is not at all the case in impact markets as outlined here.
I think this is supposed to be solved with the word "oracular." As in whatever the oracular funder says, goes. They should pre-declare some criteria, but they could also refuse to pay out for real or petty reasons. This is a risk that the market will need to price in.
A good effective altruist with a reputation to protect (such as Scott Alexander) or equivalent people would play that role.
I don't get the problem with the capitalistic option. If I'm a normal employee at a for-profit company or a charity and I get paid the market rate, I still kinda get "the credit" for being a good worker, right? So why can't I get the credit for being a good founder while also getting market-rate money for it?
The incentivizing-evil issue kinda broke the beauty of the idea for me... could it be solved somehow with fines? Say the oracle funder requires that you publicly buy the shares. Then, if the project turns out to do harm, they tell you to pay to offset it, and you do it because...
1) they'll refuse to pay you for other project shares and harm your career as a charity funder?
2) all oracle funders cooperate on 1) and super ruin your career as a charity funder?
3) other entities cooperate to not cooperate with you and be mean to you unless you pay the fine?
4) it turns into a dystopian court system where the oracles can randomly ruin your life?
Super interesting. Re "whoever snaps up the shares first will get most of the surplus", two possible solutions:
1) Have an IPO window, and if it's oversubscribed, allocate by lottery. This has the advantage of being easy and the disadvantage that you under-reward people for spotting the opportunity (though I think that's OK - when it's oversubscribed, there wasn't much value in spotting the opportunity).
2) Have an IPO window with a more complex allocation, e.g. if it's oversubscribed by a factor X, everyone gets 1/X of what they asked for (see the sketch below). When the UK government sold the Royal Mail to retail investors about 10 years ago, I think everyone who applied got what they wanted, unless you applied for over £10k worth, in which case you got that amount.
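A minimal sketch of both allocation rules (a hypothetical helper, not any real exchange's API):

```python
import random

def allocate(supply, bids, method="pro_rata"):
    """bids maps investor -> amount requested; returns investor -> fill."""
    demand = sum(bids.values())
    if demand <= supply:
        return dict(bids)  # not oversubscribed: everyone is filled in full
    if method == "pro_rata":
        # Oversubscribed by factor X = demand/supply: everyone gets 1/X.
        scale = supply / demand
        return {k: v * scale for k, v in bids.items()}
    if method == "lottery":
        # Fill whole requests in random order until supply runs out.
        fills, remaining = {}, supply
        for k in random.sample(list(bids), len(bids)):
            fills[k] = min(bids[k], remaining)
            remaining -= fills[k]
        return fills
    raise ValueError(f"unknown method: {method}")

print(allocate(100, {"a": 150, "b": 50}))  # pro rata: {'a': 75.0, 'b': 25.0}
```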
I think not letting founders get rich is going to be crucial to public support for these things. Purely my personal feeling, based on working in the third sector in the UK for the last 7 years, including for a funder for the last 5.
Fundamentally, for a charitable action, I would think that a founder asking for $1m but only getting $500k means we should not go forward. The project won't be a half-as-good project. Most likely, it will just fail.
Why is "whoever snaps up the shares first will get the most surplus" is "unfair" in some way? Isn't that just the way the world works?
I mean, if you buy it up first, you are taking a risk on a project. As events unfold, the price of that share goes up or down. You could have waited to get more information and if the price goes down, then that's your fault. I don't see the unfairness here.
Good question. I don't think I am saying "it is unfair" as I haven't thought about it enough. I am saying "it will be perceived as unfair by enough people to undermine public support for it". This is based on a decade working in policy and politics in the UK and may not apply to public opinion elsewhere.
The comparison that came to mind when I read that the winners would be "fast investors who may not have added any value" was professionals/bots that buy up concert/sports tickets as soon as they go on sale, and then list them on resale websites at a huge markup. This is very unpopular.
I think my argument is: In the UK, enough people will be concerned about anyone making a profit from charitable work that any new funding mechanism will be at risk of being seen as unfair, even if there are good logical arguments why pragmatically it's still right. "Fast investors profiteering without adding value" falls into the category of things that will be seen as unfair. This will undermine the new funding mechanism.
Let me know when you've worked it out - happy to invest my money in impactful ways, but I've been around funding long enough to be impatient and sceptical of new/innovative impact models & evaluation frameworks.
Regarding the auctioning off of the tokens: I think this could be done with a Liquidity Bootstrapping Pool [0] [1]. You would first sell some tokens personally (over the counter), then put the proceeds from that sale plus the remaining tokens into the Balancer pool, which uses an automatic market maker mechanism to sell tokens at a fair market price until a previously set percentage of tokens has been sold.
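For intuition, a sketch of why an LBP discovers a market price: Balancer quotes a spot price from pool balances and weights, and an LBP's schedule shifts weight away from the project token over time, so the quoted price falls until buyers step in (swap fees ignored; all numbers illustrative, not from any real pool):

```python
def spot_price(reserve_balance, reserve_weight, token_balance, token_weight):
    # Balancer spot price of the token in reserve currency, ignoring fees:
    # (B_in / W_in) / (B_out / W_out)
    return (reserve_balance / reserve_weight) / (token_balance / token_weight)

reserve, tokens = 100_000.0, 1_000_000.0  # e.g. $100k against 1M tokens
for token_weight in (0.96, 0.80, 0.50):   # weight schedule during the sale
    quote = spot_price(reserve, 1 - token_weight, tokens, token_weight)
    print(token_weight, round(quote, 3))
# 0.96 2.4 / 0.8 0.4 / 0.5 0.1 -> absent buys, the quoted price declines
```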
Optimistically, it feels like a lot of these concerns will ultimately be circumvented by the market. In theory (ideal end-state, as you call it), you don't have to worry about who "should" be allowed to reap financial benefits, for example, because funders and project runners will simply decide for themselves which kind of operation they want.
It seems like a pilot could be run on KickStarter right now, to be honest, and would provide more concrete data for future investors/charities to reference.
Great idea around just using Kickstarter as a way to... kickstart!... this. This is one of those practice-over-theory things. We've argued this stuff to death.
Actually, thinking about this now, we'd need to sell this only to HNW (high-net-worth) individuals, which precludes Kickstarter and Indiegogo, but I bet there's an HNW version selling non-liquid assets.
I originally parsed "We don’t get the “coolness” boost of using crypto" as an obvious joke, but the following paragraphs appeared to suggest it was intended unironically. It's fascinating that bubbles exist even today where "NFT" and "crypto" signal the precise opposite of everything they do in my techy bubble.
Right. When I read that I thought "What about the crypto lameness drag?"
But I suppose if the right people believe it's still cool, it doesn't matter what most people think. I suppose when Crypto.com Arena changes its name (I give it two years max) it will be harder to credibly claim it is still cool.
Does the final oracular funder announce in advance what outcomes they're willing to fund, e.g. "we'll definitely pay $2,000 per QALY"? Otherwise, the funder has a perverse incentive to pay out in only a fraction of cases (whenever they can find a flimsy justification), as by that point the good thing has already been done, so it's in their interest to keep their money as an incentive for further projects. This seems doubly so if these are being done for profit.
These also have the potential to become Utilitarian Indulgences - companies that want to rely on child labour or pollute a few rivers can buy these as a moral carbon offset.
Surprised no one else has drawn a parallel between these markets and the growing carbon offset market. The fundamental challenge of the market seems similar: people want (but aren't obliged) to pay for a good thing to happen, but measurement of the good thing is hard to do.
Absolutely agreed here. The measurement is hard in itself, but pre-committing to a set of hard-to-fake measures adds an additional layer of complexity for the oracular funder.
On the other hand, as I mentioned above, I think while it is important for funders to pre-declare some set of criteria, we don't need to over-insist on it. We can still rely on the funder's good faith that they will follow through, but they have the right to call shenanigans on people who try to game their pre-declared criteria. Funders who do this in bad faith get their initiatives discounted by the market going forward.
I think this is supposed to be solved with the word "oracular." As in whatever the oracular funder says, goes. They should pre-declare some criteria, but they could also refuse to pay out for real or petty reasons. This is a risk that the market will need to price in.
A good effective altruist with a reputation to protect (such as Scott Alexander) or equivalent people would play that role. Fundamentally, that oracular funder would be relied on to care about that reputation.
I think the moment they get a reputation for not paying out, investors will devalue shares aimed at those oracular funders' targets, and fewer founders will therefore attempt them.
I agree with your summary of the market structure, but don't find it particularly reassuring. Even the most reputable effective altruist surely has less of a reputation to protect than Microsoft or Google or any of the other large corporations funding the carbon credit markets. Yet the gains from these markets have proven extremely hard to quantify and considerable resources have been poured into making them "seem" good. The same problem would seem to plague these impact markets: participants splitting their investment between *impact* and *the appearance of impact*.
Which is all to say: if you're an oracular funder (or Microsoft or Google), please please please fund improved measurement methodologies.
I agree in general, but with some ifs and buts. It is definitely not reassuring at all IF you are thinking about this in the very global sense of building a giant, hugely successful impact market. I think the approach I've outlined above is probably good enough for Step 1, which is just ACX grants done through an impact market. In that sense, we don't need someone with the reputation of Microsoft or Google to serve as oracular funders. Any amount of scale would require actors like that, though. One step at a time!
To your point about better measurement methods for carbon credit markets, I think a major constraint on funders will be their ability to measure outcomes. That's why I almost wonder if we're thinking too big. Carbon credit markets, curing malaria in Senegal -- so hard to measure! I wonder if the way to kick off a working impact market is low cost, low impact, but high-frequency projects such as "$3k to do a classical music concert in my town center." Then those get more and more ambitious. Of course, that's gonna require a lot of patience and might not work.
>This post isn’t about the theory. It’s about the annoying implementation details. It may not be very interesting to people who are neither effective altruists nor institution design wonks, sorry.
I actually enjoyed this, for whatever it's worth, despite having no particular interest in EA (too poor) or institutional design. Any chapter of The Chronicles of Moloch is gonna be interesting reading; coordination problems seem like the fundamental difficulty of human history.
Also, now I wanna see a noir story about retrospective assassination funding... perhaps call it "Killer Prophets".
I don't see why we have to go all the way to assassination funding for weird things you could get an impact market to do (though it's hilarious and dark!).
An impact contract for a concert on your town square? For someone to streak at a sporting event? The possibilities are endless as long as there's an oracular funder.
a) You end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
All the other questions about how the 'extra' money gets distributed follow from this.
b) Publication of project ideas, or: how do you make sure that ideas are not stolen or (deliberately or unconsciously) copied?
The *founder* has two functions: having a good idea / designing a project (*invention*) and executing the project (*implementation*). Imagine my colleague has a brilliant but - in retrospect - very simple idea on how to make projects 'raising the sanity waterline' more efficient. He designs such a project and wants to implement it, but he has only so much experience, and investors are hesitant to invest. CFAR and everybody else interested in 'raising the sanity waterline' read the idea, set up a project based on it, back it up with all their experience and clout in the field, and get the money. Good for the impact, but the inventor will probably anticipate this and not put out their idea. Again, with *very* specific things, you can have a norm of 'respect that person X wrote that first'. With a lot of great project ideas (which are often some type of improvement on existing things), it will be very difficult or close to impossible to distinguish between what was 'stolen' and what was 'we just obviously thought of that ourselves'. (Even more, people will suddenly have this great idea when thinking about a project they are planning, and be totally unaware that they actually *read* it in a slightly different context some months ago.)
Solution: you need to make sure that ideas / project designs are available only to investors, not to other potential founders.
Disadvantage: I have no idea how to make this work without contradicting other intended mechanisms.
Other solution: you need to reward *inventors* for their great project design, even if they're not getting the investors' money for implementation. I don't know yet how to do this within this design.
Question: how does this work in VC? You don't put your business idea out on the open web to see if it attracts capital.
c) I understand that you want all the elegance and mechanisms of a real market with lots of options. Given the difficulties and disadvantages, there could be intermediate solutions. E.g.: founders pitch their project to potential final oracular funders (for the moment, a limited number of players known beforehand). A funder says: I find the idea of curing malaria in country X with the planned amount of $1 million great; if it works, I would be willing to pay $3 million for it. Then all the projects that have backing from a funder go to the potential investors. The projects that attract the necessary amount of investment get funded.
d) The measurement. How do final oracular funders know how well the project worked out? There is a *lot* of incentive to distort this in the reports. (When this whole thing grows bigger; hopefully less so in your next rounds of grants.)
Related: most projects I know of don't fail on the few measurable indicators they commit to. They fail on everything else. You mentioned this, but I think it's under-accounted for in the reflections on the set-up. Maybe EA has long since solved this and I'm unaware.
e) Re 2: in the charity/grants/projects worlds I know, funders take all the credit for *funding* the greatest projects, and organizations/firms take all the credit for *implementing* them (in fact, for *designing* and *implementing* them). Makes sense to me, because we're talking about two different tasks.
When the EU funds a project, 'EU funded' is plastered on each infrastructure project built, each tractor bought, and each scientific article published. The firm that built the infrastructure will claim 100% of the credit for carrying out this magnificent (EU-funded) building, and scientist X for doing that magnificent research. For both, it's important.
I guess this goes in the direction of 2B (without having read Ben H.'s article). I find that the 'sells all the credit' or 'sells part of the credit' framing unnecessarily complicates things. The latter might be unproblematic or even attractive in certain circles (VC / rat-adjacent / EA?), but I would worry it might have a chilling effect when selling the method to many other groups of people, including those usually engaged in non-profit projects.
Admittedly, 'I cured malaria' sounds better than 'I funded the cure for malaria', but I think the latter still sounds pretty good ;). You could partly solve this by selling 'cure malaria (funding)' certificates, so that when you talk about this as 'I cured malaria', everybody at the party knows you're the funder, not the one implementing it. Or vice versa.
> you end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
See:
> Why wouldn't they pay $5M for a project with a guaranteed $5M outcome, rather than gambling $5M on five different projects none of which might work?
Thanks. I don't think that's an answer. I'm not questioning (as the commenter above did) that somebody would be willing to do that. I'm asking this in the context of 'we want to achieve maximum impact'.
I think Scott is making an inaccurate simplification when he says that the final funder would pay $5m for $5m of impact. In actuality, the final funder might be hoping to pay somewhat less than $5m for $5m of impact. Or more precisely, the final funder is doing this because the impact market is offering a higher return in terms of impact per dollar than is available without the market.
Under the status quo, we have, e.g., a bunch of possible projects that cost $1m each. A final funder is competent enough at screening projects that they can identify a pool of projects that will produce, say, $2m of "social good" in expectation (but some of them will end up not working out, and the ones that do work out will produce much more than $2m of social good). If the final funder doesn't have enough money to just fund the entire pool, they can randomly sample and still get $2 of social good per $1 invested. This is basically where we are now.
With an impact market, the funder can instead retroactively select the projects that produced at least $5m of social good at a cost of $1m, and "buy the impact" at a cost of, say, $2m, achieving a higher social return per dollar while also doubling the money of the investors who funded those projects. But investors who funded less good projects - who didn't do any better at picking projects than the final funder could have done on their own - might not get any money.
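In code, the comparison being drawn (all numbers are the commenter's examples above):

```python
# Funder's social return per dollar, status quo vs. impact market.
status_quo = 2_000_000 / 1_000_000     # fund $1m prospectively for an
                                       # expected $2m of social good: 2.0x
impact_market = 5_000_000 / 2_000_000  # retroactively buy $5m of realized
                                       # impact for $2m: 2.5x
print(status_quo, impact_market)       # 2.0 2.5
```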
Do you think it could ever work for the oracle to purely retroactively award projects? I would think it would require pre-stating some set of criteria, against which people propose initiatives to the market. As such what you described in the last paragraph doesn't seem workable to me.
The whole idea rests on the assumption that the funder isn't able to identify good projects, because they don't have the technical/political expertise to do so. They're effectively paying a commission to an investor for doing this.
More realistic numbers would be something like: charity knows how to generate $500K impact with $1M (thus, they don't invest their money). Professional investor knows how to generate $5M instead. Any sale price of the successful project between $1M and $5M clears the market (i.e., causes the project to happen and generate $4M in "gains from trade").
This is a net gain for the charity, though the problem that remains open is how exactly those $4M get split up. This is a standard issue in economics, and the answer is that the gains go to whoever has the better information and/or the stronger position from which to pose an ultimatum.
The impact market itself only comes in to create a wonky time-shifted contract, that allows the two parties to find each other. If the charity already knew a good investor in their cause area, they would just hire them as a consultant or something.
It's not clear to me that "just hire the investors as consultants" is a good substitute for the impact market, even if we wave away the substantial problem of identifying the good investors in advance. For one thing, the investor might be hoping for a substantially higher reward (albeit with correspondingly higher risk) in the impact market compared to what is available as a fixed fee. Effectively, part of the good investors' returns comes from the bad investors. (Simplified comparison: the charity spends $1m on each of two projects, of which one ends up producing $2m of impact and the other $0. Versus: investors fund the two projects, and the charity pays $2m to the investors of the impactful project. In both scenarios the same projects have been funded and the charity has paid out the same total amount, but in the second scenario the good investors gain $1m while the bad investors lose $1m.)
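A minimal sketch of that simplified comparison, to make the redistribution explicit (figures from the comment above):

```python
# Two projects at $1m each; one produces $2m of impact, the other $0.
project_cost, good_impact = 1_000_000, 2_000_000

# Scenario 1 (status quo): the charity funds both directly.
charity_spend_direct = 2 * project_cost              # $2m, one project flops

# Scenario 2 (impact market): investors front $1m each; the charity pays
# $2m for the certificates of the one project that worked.
charity_spend_market = good_impact                   # $2m, same total outlay
good_investor_pnl = good_impact - project_cost       # +$1m
bad_investor_pnl = -project_cost                     # -$1m

print(charity_spend_direct == charity_spend_market)  # True
print(good_investor_pnl, bad_investor_pnl)           # 1000000 -1000000
```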
I tend to agree that this mechanism replaces 'hiring' somebody to do the job, but it also shifts the risk of the project not working out from the funder to the investor (something you can't easily achieve by hiring someone to select projects).
> The whole idea rests on the assumption that the funder isn't able to identify good projects, because they don't have the technical/political expertise to do so.
I agree with this. The point I raised was not to say it's not worth it (that the method has no benefits), just to mention something that lacked attention, IMO.
> though the problem that remains open is how exactly those $4M get split up. This is a standard issue in economics, and the answer is that the gains go to whoever has the better information and/or the stronger position from which to pose an ultimatum.
That's a good point. Who do you think, in this game, will have the better information or more power to reap the gains?
>a) you end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
And you end up spending $0 on projects that don't work. And your $5 million likely incentivizes more than $5 million in private funding for projects.
> you end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
Small nitpick, but actually you end up with $6m spent in total on a project that only cost $1m to do ;)
I would think that the oracular funder has the discretion here. "Oracular" being the operative word. They think doing X will yield $5m in benefits. If they think there's a better project out there, they would be diverting this $5m elsewhere. We don't question it, because they are "oracular."
From the Funder's perspective, you are choosing between "this $5m benefit to the world never happens" vs. "I spend $5m on a $1m-cost project." The cost is irrelevant to the funder. They believe the outcome is worth $5m to the world, so that's their willingness to pay.
This doesn't seem like a problem to me, if we accept the "oracle" part of "oracular funder."
I am skeptical about this working, but one useful role for prediction markets or similar would be the ability to predict that attempt X will fail (or succeed).
This may provide some protection against blatant scams and failures.
Though overall, this sounds like an interesting project with many traps, up to the SEC appearing and ending the fun, or maybe even a criminal case for someone (is there a prediction market for that already? :) ).
On issue Number 9B: my experience of the UK pharma regulatory context is that when there's enough money on the line, people can get quite argumentative about exactly how their intervention is assessed. Quite a lot of this is that it is really hard - probably impossible - to say "this intervention saved X lives for sure", because it will always involve more-or-less reasonable assumptions (for example, that counterfactually everyone wouldn't have spontaneously killed themselves as a protest against non-intervention; or, maybe more reasonably, what the correct statistical approach is to participants in the trial who deblind themselves somehow or cross over trial arms). So you end up saying, "Our best guess is that we saved Y lives" and then defending your decision from criticism by the people with a large financial stake in Y being as high as possible.
The rough cost to the regulators of defending one pharmaceutical submission in the UK is £143,000. This is an overestimate of how much the oracle would spend to do exactly the same thing as NICE, because large government agencies are inefficient, but it is also an underestimate of the true cost, because the pharma companies do a lot of the work for the regulator; for example, the company always funds the literature review and the development of an economic model, which are close to £100k each. My guess is that the total cost of a full assessment, including salaries, is probably fairly close to £500,000(ish). Also, it is unclear to me who funds the trial to assess effectiveness, but this could be hundreds of millions of pounds if you want to RCT a big intervention (and if you don't want to RCT the intervention, you introduce a lot more points to argue about).
Basically, I'm a bit unclear how the 'guard labour' costs of the oracular funder are accounted for. These costs are non-trivial and it probably wouldn't be suitable to use the current EA method of well-meaning people giving it their best shot to assess outcomes even if there are enough well-meaning EAs around to make this possible, because if the project works well there will be a whole bunch of monied interests attacking your decisions.
I don't see any problems with founders getting rich. If that's a persistent problem, more smart people will flock to the charitable sector. Thus driving down excess returns for founders and also causing more charity to happen.
Yes! Founders can only get "too much" money if there aren't enough founders with good ideas competing for investors' money. If you have faith in the market solving all the other issues here, then the market will definitely solve this one for you also.
Absolutely... every participant needs to be incentivized in order to make this market work. I think what's been under-discussed so far is the expected actual market-clearing quantity of projects. Without proper incentives, q will be very low, IMO.
Why try to make philanthropy more efficient? Philanthropy is on net socially detrimental. Donors are at the heart of the culture war (e.g. George Soros). They are not accountable to the people they are allegedly helping. I will elaborate on my Substack in a few days.
I am suggesting that wealthy people should invest in profit-seeking businesses, which are accountable to the people they are supposed to serve. Non-profits are accountable upward, not downward, and hence are always going to be dysfunctional.
There's a Deontological style argument that if you are funding something, you get to be in charge of it, but there is also a Consequentialist style argument that you get better results by being responsive to end users.
Well, when you are running a company there's both options: equity investors get to (theoretically and indirectly) be in charge of the company. Debt investors normally don't get to be in charge. But if a bankruptcy happens, prototypically they turn into equity investors and the previous equity investors are wiped out.
Given that Kling has more publicly available written content than your average ACX commenter, I decided to find out whether he'd already elaborated on nonprofits elsewhere, and save everyone some time. Sure enough, he has, although the link has gone stale a couple of times. Here's the Wayback Machine:
I think it's a fairly representative account of the free market perspective, so I encourage people to read it - if you can respond to it, you'll be responding to more than just one commenter.
Thinking on it, I get the argument that nonprofits are dysfunctional because that's the way the incentives point. Yes, some of you claim there exist nonprofits which aren't, but the argument is that it's not enough to point out a functional one (assuming you could); such a beast is an unstable unicorn, fighting uphill. Even if that nonprofit is founded on the noblest of intents, by the most motivated of paladins, its special status will inexorably attract greedy entryists, or even the founders themselves will find themselves spending more and more of their time lobbying for grants from the state or buying up rivals to reduce pesky competition.
Mr. Kling: have I adequately understood your position here?
The argument I see in that essay isn't against nonprofits existing, mind you. It's against giving them special breaks under the law. If Soros wants to found a nonprofit that promotes the health benefits of sticking forks in your eyes, the law probably shouldn't intervene. It shouldn't even intervene if the Eyefork Foundation successfully persuades a million Americans to join the movement and throw huge parties where they merrily fork themselves. Indeed, if enough people find this intuitively wrong, one can expect another nonprofit or two to spring up extolling the risks of eyeforking, and the law ought to stay out of their way as well. Thus, eyeforking will succeed or fail on its own merits.
One way to measure that success is in terms of how many people come around to it. Trouble is, it's arguably better to convert 10,000 people to eyeforking and spend a million doing it, than to spend ten million and get 10,001. A cleaner way (in a free market) is to encourage the nonprofit to, say, sell subscriptions to customers who would like to see more eyeforking in the world (well, sense it, I guess), and let those customers be the judge of whether this nonprofit is effectively and efficiently promoting the cause. The more people like what it's doing, the more will buy subscriptions, or buy other promotional services it offers; if they don't, maybe they'll instead subscribe to a rival eyefork advocacy organization, or even an opponent. And at that point, there's no need to call them nonprofits. And the incentives are pointing the right way, to boot.
This even helps the poor. The goal isn't to help all poor, no matter the cost, as noble as that may sound; resources are necessarily scarce, and if you let your morals compel you to damn the wallet, full speed ahead, you'll just go bankrupt and end up helping no one. You *have* to be efficient.
Funding something retroactively should use no more resources than funding it proactively. You just have a different chance of getting the result you want. (If investors are much smarter than you, you have a better chance with retroactive funding.)
But sure, if you offer huge amounts of retroactive funding for doing silly things, it will waste resources. Anyone moderately capable of recognizing utilitarian good, and funding proportional to that, should come out way ahead.
"Otherwise, I’m not sure this is any worse than eg crypto, which already lets small investors lose all their money quickly."
That is a horrifyingly low bar.
From a mildly interested reader: this sounds highly complex, and I am not entirely sure why someone would invest, or what the risk of not getting paid is for the people participating in projects.
Are they paid while they do the work, with the investors risking the money? How can investors earn extra money to offset that risk?
The idea is that if you fund a $1 million project with $5 million of charitable benefit and it succeeds, the oracle funder pays *you* $5 million (or buys your credit shares at a floor price of $5 million) in retroactive compensation.
The people doing the project would either hang on to some of the shares or take a salary. The whole idea is to transplant the VC model to charity.
Oh, thanks. Now I get it.
Presumably the funder may also operate in a way where they pay $2 million for $5 million of benefits, and so on.
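To make the arithmetic concrete, a small sketch of the two payout policies mentioned in this exchange (figures from the example above):

```python
# Retroactive compensation for a $1m project producing $5m of benefit.
project_cost, benefit = 1_000_000, 5_000_000

pay_full = benefit          # funder pays $5m, the full benefit
pay_discounted = 2_000_000  # or pays only $2m for the $5m of benefit

print(pay_full / project_cost)        # 5.0x gross return to the investors
print(pay_discounted / project_cost)  # 2.0x if the funder pays less
```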
I am confused as to why the oracle funder would pay out the $5 million. Presumably there is no contract, so it would be out of the oracle funder's good will or based on a promise? Generally, I would not hold my breath for such a promise to be kept with no legal repercussions. Also, couldn't this money be better used for another problem that hasn't been solved? Seems like a waste of $5 million for charitable causes unless the recipient pays this forward to the next problem.
I guess the strategy is that the oracle funder announces this in advance, and people act based on its reputation?
And paying out ensures that the reputation is preserved and the process can be repeated?
Suppose there are 10 projects asking for $1 million, and 9 of them are useless. The funder can't tell which is which. (until afterwards looking at results) But the investors can. So the funder promises to pay $2 to any investor that funds the one project that actually works.
I'm a bit baffled by the extended discussion of how an investor could somehow own the credit for the good work – you point this out briefly early on but then in a later section it seems to be taken as possible that a 100% equity holder could be treated as having done whatever the charitable thing itself is (beyond merely having funded it). Surely that kind of social consideration arises from interactions like interviews and accounts of how the work was actually done, such that nobody would seriously conflate the two. At best I can imagine perhaps someone wants to give the charity founder an award and they contractually have to ask that it goes to the funder instead, but I think the audience at that awards ceremony would likely find it strange and awkward.
The optimistic take is that the EA space is possibly one of the few where you can seriously say "well I bought the credit for this by investing early" and then everyone of your peers agrees to credit you. (Why would they do this? Because they want the market to work.)
If you bought the tokens of a project that then turned out to successfully do something good and valuable, you get, a) profits, and b) some social/moral credit for having correctly predicted that project would succeed, and for caring about helping that problem get solved (although this gets diluted the more profit you made). The market works just fine on these terms - and no real person will ever think that you get the credit for coming up with the successful idea or for executing it well. Anyone who wants to do a similar project and wants advice/someone to run the project will obviously go talk to the founder, not to you.
This concern is nonsensical. You cannot buy a third party's awareness of who actually did what simply by buying shares in the possible financial profits.
The concern is incentivizing the oracular funder, not the investor, since they're the ones who end up owning the credit and need to want to pay for it for this to work. They won't be getting any profits, so there needs to be some other benefit for them.
Charitable foundations whose reason for existing is to fund beneficial projects do not typically need to be incentivized to do the thing they're trying to do anyway, but more effectively. They are already being judged (and donations being made to them) on the basis of whether they are effectively allocating money - and they will get social/moral/effective altruism credit for having *funded* good projects and enabled those projects to happen. They will not get credit in anyone's mind for having *done* those projects.
To me, it seems like the point of doing this is to bring in people/perspectives/information from people who don't already care. The focus on the EA space doesn't fulfill that here.
Agreed here. I think the problem this is trying to solve is that currently oracular funders have poor information. So the proposed solution in this post is a market mechanism that would combine all the information together, thereby reducing waste and improving asset allocation. In order to succeed at that, the market mechanism must incentivize a wide array of participants. My gut says the proposal around social credit would not attract a sufficient number of market participants, and would also attract a very narrow/specific segment.
Sorry for being late to the party, but I fail to understand the central argument: why would anyone bother to buy the credit? Even without the premium, this would hardly make any common sense.
In "normal" markets, patents, companies, or brands are bought with an expectation of gaining more profit from them later. That makes sense as an investment.
Products and services are bought as a means to an end. That makes sense as a purchase (or as an investment, too).
Buying the credit after the fact seems to me to make sense neither as an investment nor as a purchase. Just "bragging rights"?!
What am I missing?
I think the same reason any charity funds any project (i.e. some combination of altruism, fuzzies, and bragging rights). The proposal is that, instead of the current system where the charity gives money in advance to fund a project that may or may not work, they give the money afterwards to a project that did work.
Opportunity cost: Why would a charity pay $5M retroactively for a $1M project, giving up at least $4M that could fund other projects?
Definition of charity: If a charity is generating profits for private investors, is it still a charity? Or a sale of indulgences?
Bragging rights: Even setting aside how doubtful the bragging rights are, aren't you diluting the signal of the original contributor? Don't we want them to have the bulk of the reputation, in the hope that their future ideas are funded again?
This market seems to lack a credible buyer (beyond the initial motivated creators of the market).
Why wouldn't they pay $5M for a project with a guaranteed $5M outcome, rather than gambling $5M on five different projects none of which might work?
And it's only a "$1m project" in the sense that a $500 pair of designer shoes whose raw materials cost $20 is a "$20 pair of shoes".
> And it's only a "$1m project" in the sense that a $500 pair of designer shoes whose raw materials cost $20 is a "$20 pair of shoes"
Yeah, that's how *non-profit* works. The project is: costs (including salaries) + $0. No excess of revenue over expenses. That's what makes it different from business.
And that's part of the reason why most charities don't do much good.
It's part of the reason why people are willing to spend money on charities or work there for a less-than-competitive salary, effectively donating their time.
Please, let's not get into ideological 'market is better than state' arguments, or vice versa; I'm tired of ideological arguments about whether 'the market is great' or 'the market sucks', especially when they come as a side effect of whatever other topic is being discussed.
I didn't compare non-profit and for-profit in terms of effectiveness, and didn't intend to do so. Whether you define 'a $1M project' as '$1M is what people would pay for it' or '$1M is the actual cost of the project' is, well, a matter of definition. But it's worth noting that one is typical of market logic and the other of non-profit logic - and that there are two very different groups out there who function primarily according to one of these (and sometimes despise the other).
I understand the argument, but I fail to understand how a charitable project can be valued higher than its actual cost (similar to the shoe example). It still seems to upend the definition of charity. Some other commenter here asked the same question: How is that value defined?
Is it all just banking on the vanity of billionaires and charity funds? If so, what's the real fundamental difference from today? A billionaire fund can already enjoy the glory of the Good Deed. The only addition would be to funnel away some charity money into private hands (and attempt to slightly shift who gets to bathe in the glory, but that's just a technicality and no difference in principle).
> I understand the argument, but I fail to understand how a charitable project can be valued higher than its actual cost (similar to the shoe example).
Let's assume one can distribute life-saving medicine at a distribution cost of $5 per person, with the medicine costing $5 per unit, one life saved per 10 distributed units, and all overheads doubling the cost.
This unrealistically optimistic case would give $200 per life saved.
I think we could find plenty of people willing to pay over $200 per life saved, thereby reaching a case where a charitable project can be valued higher than its actual cost.
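Spelling out that arithmetic (a quick sketch; the variable names are mine):

```python
# All numbers come from the example above.
distribution_cost = 5      # $ to distribute one unit
medicine_cost = 5          # $ per unit of medicine
overhead_multiplier = 2    # "all overheads doubling cost"
units_per_life = 10        # 1 life saved per 10 units distributed

cost_per_life = (distribution_cost + medicine_cost) * overhead_multiplier * units_per_life
print(cost_per_life)  # 200, i.e. $200 per life saved in this optimistic case
```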
>Is it all just banking on the vanity of billionaires and charity funds? If so, what's the real fundamental difference from today?
The difference is that the charity is not taking on the risk of allocating funds and is not expected to single-handedly know which projects should be funded for maximum impact. Private individuals/funds who think they are better at evaluating the charitable impact of a project take on the risk, and the charity only spends money on stuff that works.
Compare to a question like "why bother punish criminals for crimes, when they've already committed the crime and the punishment won't reverse that?" Although the punishment itself doesn't help, the fact that everyone expects crime to be punished decreases crime, and people won't expect that unless you follow through. So punishment "causes" less crime, even though it never directly changes the behavior of the specific crime it's applied to.
In the same way, if you accept the arguments I gave at the beginning for why funding charity through impact certificates is better than the normal way, by buying the certificates (ie rewarding good purchases), you're causing the impact certificate system to exist, and causing more charity. That is, the system only works insofar as everyone expects their charitable projects to be rewarded, and the best way to produce that expectation is to make it true.
To put it another way: suppose a charity credibly promises to buy impact certificates. Lured by that promise, an investor funds a good project and makes it happen. The charity's promise has caused the good project to happen. Then the charity has to decide whether or not to keep its promise. If it breaks the promise, the good thing will already have happened, but nobody will trust the system ever again, and they won't be able to make more good things happen through the same method.
Ok, this clarifies it. Let me try to summarize:
Creating an impact market in the hopes of creating a higher rate of (effective) charity projects than existed before the market.
The tradeoff being that, due to the generation of pure profit, total expenditures rise faster than the total amount of good done (the number of additional projects times the charity done per project).
If this is correct, isn't this somewhat antithetical to the idea of EA? More good is done in total, but less efficiently per dollar. A counterargument could be, that the profit generated could be invested into more charitable projects. But private investors could also just build another mansion with it.
Also:
What about gaming the market? I.e. exploiting the market by producing a problem in the first hand, like delivery of faulty or low quality medicine and then get yourself funded to fix the problem with That One Easy Trick.
What about cannibalization of existing charitable efforts by creation of the impact market? Some companies already engage in charitable efforts for example. Now that there is a market, they could be forced by these market forces to capitalize on their charitable efforts. Is this good? Is this exactly what you are looking for?
I can't get around the fact that in the end, somebody has to pay the investors. Does this really solve the problems that charity is trying to solve? Couldn't the market just devolve into an incentive to keep societies, countries, people that need charity in an everlasting state of needing charity?
I don't see the tradeoff. Why do you expect profit to make things worse?
Private profit oriented companies are extremely efficient compared to most alternative forms of organisation we've tried at scale.
Imagine all funders together are willing and able to spend a sum M to make the world a better place (to create impact in areas deserving of their charity). Without 'intermediaries', you can spend M directly on achieving impact: I(0) = M. Enter 'profit', totaling a sum P. Now the money you can spend on impact is I(1) = M - P.
So if you add profit, you lower the total amount of money available for achieving impact: I(1) < I(0). That's the tradeoff. The higher the profit P, the lower I(1) (given that the overall amount of money M available through funders is fixed).
The inefficiencies/efficiencies you're talking about relate to how efficiently we spend whatever I(0) or I(1) is available.
In an ideal world, we would be able to spend all of M on impact, and spend it extremely efficiently. The world is not perfect; it still makes sense to strive for maximum efficiency while limiting profits as much as possible.
The tradeoffs come into play where more of the one cannot be achieved without more of the other.
As usual, there are additional side effects not included here: e.g., would more efficiency raise the interest of additional funders (M(1) > M), or would more profit-orientation deter other funders (M(2) < M)?
Right, but the size of the effects matters.
Average market net profit is ~5%. I believe very few charities operate anywhere close to private-sector efficiency. There are charities whose efficiency could easily be improved by an order of magnitude or two. That is far more waste than the 5% profit margin typical of for-profit companies.
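As a toy illustration of why the margin can matter less than the efficiency gap (made-up numbers, assuming the order-of-magnitude claim above):

```python
# Toy model: fraction of each donated dollar that becomes impact.
# "overhead_multiplier" is how many dollars it takes to deliver $1 of impact.
def impact_per_dollar(overhead_multiplier, profit_margin):
    return (1 - profit_margin) / overhead_multiplier

# Inefficient charity: no profit taken, but 10x less efficient delivery.
charity = impact_per_dollar(overhead_multiplier=10, profit_margin=0.0)

# For-profit-style operator: ~5% net margin, near private-sector efficiency.
market = impact_per_dollar(overhead_multiplier=1, profit_margin=0.05)

print(charity)  # 0.1  -> 10 cents of impact per dollar
print(market)   # 0.95 -> 95 cents of impact per dollar
```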
> If this is correct, isn't this somewhat antithetical to the idea of EA? More good is done in total, but less efficiently per dollar.
If this selection process is better than the alternative selection processes, it may be worth it.
For example, let's say we have Scott with $5 million in funding, willing to fund this way but unwilling to go through manual grant selection.
In that case you either get projects funded this way (with some losses) or nothing, making the first option more effective.
Or maybe Scott should donate that 5 million to anti-malaria nets.
> If this is correct, isn't this somewhat antithetical to the idea of EA? More good is done in total, but less efficiently per dollar.
I'm pretty sure EA mostly cares about the total amount of good done. It tends to focus on efficiency per dollar because often that's easier than increasing the total number of dollars, but that's not the core goal.
This makes complete sense, but I think it's still quite counter-intuitive to pay for a good that has already been achieved. People want their money to go toward creating new good things. The fact that overall the system might be better with retroactive funding doesn't change the fact that that specific step will feel like an odd way to spend their money.
This is rather unlike punishment for crimes, where many people seem to share a visceral belief that those who commit evil acts should suffer for them.
The 'prize' idea is the closest parallel, but even in typical prizes, while the prize is given after the fact, it's committed before the fact, so at commitment time, there's still no retroactivity.
I think it just seems weird because it's different from what we're used to. That doesn't mean there's anything fundamentally wrong with the idea.
No, indeed; in fact I really like the idea. I'm trying to work out why my feeling is that, as presented, it won't get much public interest. Perhaps it's possible to educate people to pay for goods already achieved (and to pay more than they cost) on the basis that it'll bring more good in the future, but it seems like it might be working against the grain. Even if it is, perhaps rationalists control enough charitable money to make it work.
You can look at the state of free-software funding. Lots of very cool projects do make money from donations after the fact by fans, but the level of recompense most projects currently receive is far, far below commercial equivalent costs, and for this to actually work, there has to be an expectation that the works will be rewarded commensurate with their value.
I'm still confused by the idea of selling equity in a moral action. Outside of Section 2, and 11a and 11c, I think the post makes sense. But the idea of that we can meaningfully transfer social credit for a successful intervention, in whole or in part, from founder & investors to the oracular funder seems totally absurd, to the point that I question whether I really understand other parts.
Say Alice the Microbiologist founds the End-Malaria-in-Senegal project. She sells 90% of the impact shares to Bob the Impact Investor for $900k, and keeps 10% for herself while investing $100k of her own money. The project ultimately succeeds, and Carol the Oracular Funder buys Alice and Bob's shares for $5 million. A week later, Alice, Bob and Carol are attending the same social gathering. People ask each of them what they've been up to.
Alice: "I founded and lead a project that successfully ended malaria in Senegal through impact equity funding. I funded 100k of the project myself, and sold my shares for 500k."
Bob: "I spent $900k to fund 90% of a project that ended malaria in Senegal, then sold my impact shares for a 5x return."
Carol: "I spent $5MM to buy 100% of the impact equity for a project that ended malaria in Senegal."
Is everyone supposed to be 0% impressed with Alice and Bob and 100% impressed with Carol? When I imagine myself among the guests, I notice that my internal moral value function is assigning the most points to Alice, and a nonzero amount to Bob as well. And if I'm the dean of the medical school where all three of them studied, Alice is the only alumna I'm excited to brag about to my social rivals at the next semiannual medical school deans' wilderness retreat.
I also notice that my feelings don't shift much if instead Alice and Bob keep half of their respective stakes after the project succeeds. I can tell myself that Carol should get 50% of the credit, Bob 45%, and Alice 5%, but I can't make myself feel that to be true in a meaningful way. Maybe I could, if I'd grown up in a world where impact markets already existed, but even then I'm skeptical. And I think Carol would realize this too, such that buying the credit for the project wouldn't be worth $5MM to her.
OTOH, if we insert another event at the beginning of the story, where Carol publicly precommits to paying $5MM to whoever can end malaria in Senegal, then everything makes sense again. In this version, Carol doesn't buy any credit from Alice. Instead, Carol gets (an entirely separate pool of) credit as soon as she makes the commitment. Her virtuous act isn't curing malaria, it's pledging $5MM to a system that makes things like curing malaria possible.
When the project succeeds, and Carol pays out, Alice keeps all the credit, just as if she'd funded the project through philanthropic grants. Bob mostly just gets rich, with some vague added bonus of being able to tell people he did it via impact investment, rather than fossil fuels or child labor or whatever.
The crime/punishment analogy makes perfect sense, and I don't see the need to introduce the idea of fungible fractional shares of some ethical activity.
This is a great summary, thank you. I think your second-to-last paragraph describes the necessary outcome:
Alice gets social credit as the do-er
Bob gets rich as the investor
Carol gets satisfaction as a committed effective altruist
There's no need to play games around distributing social credit post-hoc (as if we could anyway).
I should add, Carol either gets a different set of social credit as you describe or (more likely in my opinion) has to be willing to walk away feeling satisfied. I think Carol, if they are a good effective altruist, ought to let Alice get the social acclaim to incentivize future founders.
Maybe you addressed this point and I simply missed it, but otherwise it seems as if you're glossing over what looks to me like the most difficult problem with this idea: how does an "oracular funder" evaluate the outcome of a project and assign it a specific dollar value?
Without a well-designed, aligned, transparent, and fair mechanism to produce such evaluations, everything falls apart.
Well, that retroactive mechanism just has to be better / easier than our current mechanisms that require funders to predict the future.
Not necessarily - the evaluation of people working at an anti-homelessness charity of whether or not to do a particular intervention would only have to be as sophisticated as, "Homelessness will probably reduce if we do this, so let's do it". The literature on charities is littered with examples of charities doing stuff that sounds good on paper but doesn't work in practice.
However the oracle needs to identify *how much* homelessness reduced by; merely saying it seems to have gone down 'a lot' (or even "Between 10% and 15%") is insufficient. They also need to either directly assign a utility value to ending a marginal case of homelessness or indirectly do the same by quantifying utility before and after the intervention. And this then raises the problem of whether all homelessness is equally serious; what if the intervention 'cherry picks' the least severe forms of homelessness because they want to make money off the "$2000 per homeless sleeper housed" bounty announced by the oracle?
So better, probably. Easier - no way!
Isn't retrospective evaluation at least as easy as prospective? If an anti-homelessness charity has done something, we can always evaluate their claim prospectively without taking any measurements of actual outcomes.
You can't go the other way and turn a (truly) prospective evaluation into a retrospective one, though.
Thinking about it, maybe the difference is in how we're viewing this. I was thinking from a "partial derivative" standpoint --- hold the level of thoroughness of the evaluation constant, change the timing. From a "total derivative" standpoint, asking for a more thorough evaluation makes it harder, even if changing the timing makes it somewhat easier (that is, some questions that are literally impossible to answer prospectively might "only" be extremely difficult to answer retrospectively).
These markets probably do require a more thorough evaluation. That is, it's important to identify specifically which one out of five interventions actually succeeded, or to at least narrow it down to maybe something like 50% confidence in two out of the five. Thus, I now think you're more correct. I'll leave my comments here in case others find the thought process helpful.
I think the problem is that you have to prospectively commit to payout criteria. Those criteria must be pre-defined, measurable, legible, etc. And there's a Schrödinger's cat problem here, where the act of prospectively creating these payout criteria incentivizes some bad apples to solve the problem in an unsatisfactory but technically-acceptable way.
If this is true, then why is there no scrutiny towards charities today that do not/cannot prove they are allocating resources efficiently? Because otherwise this is an isolated demand for rigor.
The choice is between the current system, where we throw money at projects with little way of knowing whether they work, and a different system that forces those involved to actually work out/prove whether they're spending money efficiently. And given that investors won't invest until proper, verifiable criteria are established (and have an interest in helping establish those criteria themselves), it's not as if a bunch of money will be invested and then somebody realizes they can't tell who was successful and the whole thing suddenly collapses.
Under the current system there can be good faith disagreement about how well a charity works. If I think charity A works better than charity B then I donate to A and you to B. There are a number of problems with this system - not least the one you describe where ineffective charities with good PR are overfunded - but at the end of the day we can both be happy with our choice.
Under the proposed system there *cannot* be good faith disagreement, and this is why I am not making an isolated demand for rigour. The oracular funder needs to specify to the dollar how much impact a charity has had, and then defend its decision against financially-motivated backers who disagree with it. Both the requirements of good faith attempts at quantifying impact and the cost of arbitrating disputes are different between the current and prospective system.
I don't think the original post has any discussion of funders helping establish criteria for effectiveness. I'd be very concerned if people with a significant financial stake in X rather than Y being true were invited to offer their opinion on X vs Y; it is a significant point of failure in pharma regulation, but it is unavoidable in pharma (because the company owns the data), whereas it is avoidable here.
I agree overall that this problem exists, but I think the key idea is this word "oracular." As in, what the oracular funder says, goes. And that's a risk that must be priced into the market. Of course, the oracular funder should, as good practice, pre-declare criteria, but they could refuse to pay out if they perceive something was done in bad faith, even if the outcomes technically fulfill the criteria.
You do seem to have missed it: this is discussed in section 9 of the post.
It's discussed, yes, but not with recognition of the challenge it represents. It's the single biggest problem for this idea.
This is absolutely the limiting factor. Evaluations of impact in the social space are incredibly opaque and expensive right now. In capitalism, markets work because there are clear 'market tests': the person who is supposed to benefit (the consumer) is the one deciding how to value the product/service, and paying for it. Incentives are perfectly aligned for markets to generate social benefits. This is not at all the case in impact markets as outlined here.
I think this is supposed to be solved with the word "oracular." As in whatever the oracular funder says, goes. They should pre-declare some criteria, but they could also refuse to pay out for real or petty reasons. This is a risk that the market will need to price in.
A good effective altruist with a reputation to protect (such as Scott Alexander) or equivalent people would play that role.
I don't get the problem with the capitalist option: if I'm a normal employee at a for-profit company or a charity and I get paid the market rate, I still kinda get "the credit" for being a good worker, right? So why can't I get the credit for being a good founder while also getting the market-rate money for it?
The incentivizing-evil issue kinda broke the beauty of the idea for me... could it be solved somehow with fines? Like, say the oracle funder requires that you publicly buy the shares. Then, if the project turns out to do harm, they tell you to pay to offset this, and you do it because...
1) they'll refuse paying you for other project shares and harm your career as a charity funder?
2) all oracle funders cooperate on 1) and super ruin your career as a charity funder?
3) other entities cooperate to not cooperate with you and be mean to you unless you pay the fine?
4) it turns into a dystopian court system where the oracles can randomly ruin your life?
idk but something like that
Super interesting. Re: "whoever snaps up the shares first will get most of the surplus", two possible solutions:
1) have an IPO window, and if it's oversubscribed, allocate by lottery. This has the advantage of being easy and the disadvantage that you under reward people for spotting the opportunity (though I think that's ok - when it's oversubscribed, there wasn't much value in spotting the opportunity)
2) have an IPO window with a more complex allocation, e.g. if it's oversubscribed by a factor X, everyone gets 1/X of what they ask for (see the sketch after this comment). When the UK government sold the Royal Mail to retail investors about 10 years ago, I think everyone who applied got what they wanted, unless you applied for over £10k worth, in which case you got that amount.
Happy to expand on either of these if helpful
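Here's a minimal sketch of option 2's pro-rata allocation (my own variable names, assuming whole-share rounding):

```python
# Scale everyone's request by the same factor when the IPO is oversubscribed.
def prorata_allocate(supply, requests):
    demand = sum(requests.values())
    if demand <= supply:
        return dict(requests)          # undersubscribed: fill every request
    scale = supply / demand            # oversubscribed by X -> each gets 1/X
    return {who: int(amount * scale) for who, amount in requests.items()}

print(prorata_allocate(1_000_000, {"a": 1_500_000, "b": 500_000}))
# {'a': 750000, 'b': 250000} -- demand was 2x supply, so each gets half
```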
Both of these options sound way inferior to an auction.
Especially the Royal Mail example.
I think not letting founders get rich is going to be crucial to public support for these things. Purely my personal feeling, based on working in the third sector in the UK for the last 7 years, including for a funder for the last 5
Is an auction incompatible with a kickstarter-style approach?
If I sell 1 million shares for a project needing $1m (i.e. $1/share) and I only get $0.50/share, netting $500k, then the project shouldn't happen.
Fundamentally, for a charitable action, I would think that a founder asking for $1m but only getting $500k means we should not go forward. The project won't be half as good. Most likely, it will just fail.
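If one did combine an auction with a kickstarter-style threshold, the all-or-nothing check might look something like this (a sketch under my own assumptions):

```python
# All-or-nothing: keep the auction proceeds only if they meet the target.
def auction_with_floor(target, bids):
    raised = sum(bids)
    return raised if raised >= target else None   # None = refund everyone

print(auction_with_floor(1_000_000, [600_000, 300_000]))  # None: only $900k raised
print(auction_with_floor(1_000_000, [600_000, 500_000]))  # 1100000: funded
```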
Why is "whoever snaps up the shares first will get the most surplus" is "unfair" in some way? Isn't that just the way the world works?
I mean, if you buy it up first, you are taking a risk on a project. As events unfold, the price of that share goes up or down. You could have waited to get more information and if the price goes down, then that's your fault. I don't see the unfairness here.
Good question. I don't think I am saying "it is unfair" as I haven't thought about it enough. I am saying "it will be perceived as unfair by enough people to undermine public support for it". This is based on a decade working in policy and politics in the UK and may not apply to public opinion elsewhere.
The comparison that came to mind when I read that the winners would be "fast investors who may not have added any value" was professionals/bots that buy up concert/sports tickets as soon as they go on sale, and then list them on resale websites at a huge markup. This is very unpopular.
I think my argument is: In the UK, enough people will be concerned about anyone making a profit from charitable work that any new funding mechanism will be at risk of being seen as unfair, even if there are good logical arguments why pragmatically it's still right. "Fast investors profiteering without adding value" falls into the category of things that will be seen as unfair. This will undermine the new funding mechanism.
Let me know when you've worked it out - happy to invest my money in impactful ways, but have been around funding long enough to be impatient and sceptical of new/innovative impact models & evaluation frameworks.
Regarding the auctioning off of the tokens: I think this could be done with a Liquidity Bootstrapping Pool [0] [1]. You would have to first sell some tokens personally (over the counter) and then you put the proceeds from that and the remaining tokens into the balancer pool, which then uses an automatic market maker mechanism to sell tokens at a fair market price until a previously set percentage of tokens has been sold.
[0]: https://docs.balancer.fi/products/balancer-pools/liquidity-bootstrapping-pools-lbps
[1]: https://docs.alchemist.wtf/copper/lbps/what-is-a-liquidity-bootstrapping-pool
Why not just use an auction?
Like https://en.wikipedia.org/wiki/Multiunit_auction#Uniform_price_auction
They are used in practice already.
To me, this seems too exotic and hard-to-understand.
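For what it's worth, the uniform-price mechanism itself is simple to sketch (a generic illustration, not anything from the post; names and numbers are mine, and I use the lowest accepted bid as the clearing price):

```python
# Generic uniform-price multiunit auction: fill bids from the top down;
# every winner pays the same (lowest accepted) price per share.
def uniform_price_auction(supply, bids):
    """bids: list of (bidder, price_per_share, quantity)."""
    allocations, remaining, clearing_price = {}, supply, None
    for bidder, price, qty in sorted(bids, key=lambda b: -b[1]):
        if remaining == 0:
            break
        take = min(qty, remaining)
        allocations[bidder] = take
        remaining -= take
        clearing_price = price   # last (lowest) accepted bid sets the price
    return allocations, clearing_price

allocs, price = uniform_price_auction(
    1_000_000,
    [("a", 1.20, 400_000), ("b", 1.00, 500_000), ("c", 0.80, 500_000)],
)
print(allocs, price)  # {'a': 400000, 'b': 500000, 'c': 100000} 0.8
```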
Optimistically, it feels like a lot of these concerns will ultimately be circumvented by the market. In theory (ideal end-state, as you call it), you don't have to worry about who "should" be allowed to reap financial benefits, for example, because funders and project runners will simply decide for themselves which kind of operation they want.
It seems like a pilot could be run on KickStarter right now, to be honest, and would provide more concrete data for future investors/charities to reference.
Great idea around just using kickstarter as a way to ... kickstart! ... this. This is one of those practice over theory things. We've argued this stuff to death.
(and if Kickstarter has silly rules preventing this, just use indiegogo)
Actually, thinking about this now, we'd need to only sell this to high-net-worth (HNW) individuals, which precludes Kickstarter and Indiegogo, but I bet there's an HNW version selling non-liquid assets.
I originally parsed "We don’t get the “coolness” boost of using crypto" as an obvious joke, but the following paragraphs appeared to suggest it was intended unironically. It's fascinating that bubbles exist even today where "NFT" and "crypto" signal the precise opposite of everything they do in my techy bubble.
Plenty of EA is currently funded by crypto billions, so it's inevitable that this bubble is about as positive about it as any.
Right. When I read that I thought "What about the crypto lameness drag?"
But I suppose if the right people believe it's still cool, it doesn't matter what most people think. I suppose when Crypto.com Arena changes its name (I give it two years max) it will be harder to credibly claim it is still cool.
Does the final oracular funder announce in advance what outcomes they're willing to fund, eg. we'll definitely pay $2,000 per QALY? Otherwise, the funder has a perverse incentive to only pay out in a fraction of cases (whenever they can find a flimsy justification) as by that point the good thing has already been done, so it's in their interest to keep their money as an incentive to further projects. This seems doubly so if these are being done for profit.
These also have the potential to become Utilitarian Indulgences - companies that want to rely on child labour or pollute a few rivers can buy these as a moral carbon offset.
Surprised no one else has drawn a parallel between these markets and the growing carbon offset market. The fundamental challenge of the market seems similar: people want (but aren't obliged) to pay for a good thing to happen, but measurement of the good thing is hard to do.
Absolutely agree here. The measurement is hard by itself, but pre-committing to a set of hard-to-fake measures adds an additional layer of complexity for the oracular funder.
On the other hand, as I mentioned above, I think while it is important for funders to pre-declare some set of criteria, we don't need to over-insist on it. We can still rely on the funder's good faith that they will follow through, but they have the right to call shenanigans on people who try to game their pre-declared criteria. Funders who do this in bad faith get their initiatives discounted by the market going forward.
I think this is supposed to be solved with the word "oracular." As in whatever the oracular funder says, goes. They should pre-declare some criteria, but they could also refuse to pay out for real or petty reasons. This is a risk that the market will need to price in.
A good effective altruist with a reputation to protect (such as Scott Alexander) or equivalent people would play that role. Fundamentally, that oracular funder would be relied on to care about that reputation.
I think the moment they get a reputation for not paying out, investors will de-value shares targeting those oracular funders' targets and fewer founders will therefore attempt them.
I agree with your summary of the market structure, but don't find it particularly reassuring. Even the most reputable effective altruist surely has less of a reputation to protect than Microsoft or Google or any of the other large corporations funding the carbon credit markets. Yet the gains from these markets have proven extremely hard to quantify and considerable resources have been poured into making them "seem" good. The same problem would seem to plague these impact markets: participants splitting their investment between *impact* and *the appearance of impact*.
Which is all to say: if you're an oracular funder (or Microsoft or Google), please please please fund improved measurement methodologies.
I agree in general but with some ifs and buts. It is definitely not reassuring at all, IF you are thinking about this in a very global sense of building a giant huge successful impact market. I think this approach I've outlined above is probably good enough for Step 1, which is just ACX grants done through an impact market. In that sense, we don't need someone with the reputation of Microsoft or Google to serve as oracular funders. Any amount of scale would require actors like that, though. One step at a time!
To your point about better measurement methods for carbon credit markets, I think a major constraint on funders will be their ability to measure outcomes. That's why I almost wonder if we're thinking too big. Carbon credit markets, curing malaria in Senegal -- so hard to measure! I wonder if the way to kick off a working impact market is low cost, low impact, but high-frequency projects such as "$3k to do a classical music concert in my town center." Then those get more and more ambitious. Of course, that's gonna require a lot of patience and might not work.
>This post isn’t about the theory. It’s about the annoying implementation details. It may not be very interesting to people who are neither effective altruists nor institution design wonks, sorry.
I actually enjoyed this, for whatever it's worth, despite having no particular interest in EA (too poor) or institutional design. Any chapter of The Chronicles of Moloch is gonna be interesting reading; coordination problems seem like the fundamental difficulty of human history.
Also, now I wanna see a noir story about retrospective assassination funding...perhaps call it "Killer Prophets".
"Wanted: Dead. $5000 Reward. No Questions Will Be Asked" is not a new idea.
I don't see why we have to go to assassination funding as weird things you could get an impact market to do. (though it's hilarious and dark!)
Impact contract for a concert on your town square? For someone to streak a sporting event? Possibilities are endless as long as there's an oracular funder.
Interesting piece. Quick thoughts on this:
a) You end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
All the other questions about how the 'extra' money gets distributed result from this.
b) Publication of project ideas, or: how do we make sure that ideas are not stolen or (deliberately or unconsciously) copied?
The *founder* has two functions: having a good idea / designing a project (*invention*) and executing the project (*implementation*). Imagine my colleague has a brilliant, but - in retrospect - very simple idea for making projects 'raising the sanity waterline' more efficient. He designs such a project and wants to implement it, but he has only so much experience, and investors are hesitant to invest. CFAR and everybody else interested in 'raising the sanity waterline' read the idea, set up a project based on it, back it up with all their experience and clout in the field, and get the money. Good for the impact, but the inventor will probably anticipate this and not put out their idea. Again, with *very* specific things, you can have a norm of 'respect that person x wrote that first'. With a lot of great project ideas (which are often some type of improvement on existing things), it will be very difficult or close to impossible to distinguish between what was 'stolen' and what was 'we just thought of that, obviously'. (Even more, people will suddenly have this great idea when thinking about a project they are planning, and be totally unaware that they actually *read* it in a slightly different context some months ago.)
Solution: you need to make sure that ideas / project designs are only available to investors, but not to other potential founders.
Disadvantage: I have no idea how to make this work without contradicting other intended mechanisms.
Other solution: you need to reward *inventors* for their great project design, even if they're not getting the investors' money for implementation. Don't know yet how to do this in the context of this design.
Question: How does this work in VC? You don't put your business idea out on the www to see if it attracts capital.
c) I understand that you want all the elegance and mechanisms of a real market with lots of options. Looking at the difficulties and disadvantages, there could be intermediate solutions. E.g.: founders pitch their project to potential final oracular funders (for the moment, a limited number of players known beforehand). A funder says: I find the idea of curing malaria in country x with the planned amount of $1 million great; if it works, I would be willing to pay $3 million for it. Then all the projects that have backing from a funder go to the potential investors. Those projects that attract the necessary amount of investment get funded.
d) The measurement. How do final oracular funders know how well the project worked out? There is *lots* of incentive to distort this in the reports. (When this whole thing grows bigger; hopefully less so in your next rounds of grants.)
Related: Most projects I know of don't fail on account of the few measurable indicators they commit to. They fail on everything else. You mentioned this, but I think it's underaccounted for in the reflections on the set-up. Maybe EA has long solved this and I'm unaware.
e) re 2. In the charity/grants/projects worlds I know, funders would take all the credit for *funding* the greatest projects, and organizations/firms/... would take all the credit for *implementing* the greatest project. (In fact for *designing* and *implementing*.) Makes sense to me, because we're talking about two different tasks.
When the EU funds a project, 'EU funded' is plastered on each infrastructure project built, each tractor bought, and each scientific article published. The firm that built the infrastructure will claim 100% of the credit for carrying out this magnificent (EU-funded) building, or scientist x for doing that magnificent research. For both, it's important.
I guess this goes in the direction of 2B (without having read Ben H.'s article). I find the 'sells all the credit' or 'sells part of the credit' framing unnecessarily complicates things. The latter might be unproblematic or even attractive in certain circles (VC / rat-adjacent / EA? ...), but I would worry it might have a chilling effect when selling the method to many other groups of people, including those usually engaged in non-profit projects.
Admittedly, 'I cured malaria' sounds better than 'I funded the cure of malaria' - but I think the latter still sounds pretty good ;). You could partly solve this by selling 'cure malaria (funding)' certificates, so that when you talk about this as 'I cured malaria', everybody at the party would know you're the funder, not the one implementing it. Or vice versa.
> You end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
see
> Why wouldn't they pay $5M for a project with a guaranteed $5M outcome, rather than gambling $5M on five different projects none of which might work?
https://astralcodexten.substack.com/p/impact-markets-the-annoying-details/comment/7752564
Thanks. I don't think that's an answer, though. I'm not questioning (as the commenter above did) whether somebody would be willing to do that. I'm asking in the context of 'we want to achieve maximum impact'.
I think Scott is making an inaccurate simplification when he says that the final funder would pay $5m for $5m of impact. In actuality, the final funder might be hoping to pay somewhat less than $5m for $5m of impact. Or more precisely, the final funder is doing this because the impact market is offering a higher return in terms of impact per dollar than is available without the market.
Under the status quo, we have, e.g., a bunch of possible projects that cost $1m each. A final funder is competent enough at screening projects that they can identify a pool of projects that will produce, say, $2m of "social good" in expectation (but some of them will end up not working out, and the ones that do work out will produce much more than $2m of social good). If the final funder doesn't have enough money to just fund the entire pool, they can randomly sample and still get $2 of social good per $1 invested. This is basically where we are now.
With an impact market, the funder can instead select retroactively the projects that produced at least $5m of social good at a cost of $1m, and "buy the impact" at a cost of, say $2m, achieving a higher social return per dollar while also doubling the money of the investors who funded those projects. But investors who funded less good projects - who didn't do any better at picking projects than the final funder could have done on their own - might not get any money.
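To put numbers on that (a toy sketch using the figures above):

```python
# Status quo: the funder's screened pool yields $2 of social good per $1 spent.
status_quo = 2_000_000 / 1_000_000        # 2.0x social return

# Impact market: buy only proven $5m-impact projects, at $2m each.
market = 5_000_000 / 2_000_000            # 2.5x social return per funder dollar
winning_investor = 2_000_000 / 1_000_000  # winners double their $1m stake

print(status_quo, market, winning_investor)  # 2.0 2.5 2.0
```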
Do you think it could ever work for the oracle to purely retroactively award projects? I would think it would require pre-stating some set of criteria, against which people propose initiatives to the market. As such, what you described in the last paragraph doesn't seem workable to me.
The whole idea rests on the assumption that the funder isn't able to identify good projects, because they don't have the technical/political expertise to do so. They're effectively paying a commission to an investor for doing this.
More realistic numbers would be something like: charity knows how to generate $500K impact with $1M (thus, they don't invest their money). Professional investor knows how to generate $5M instead. Any sale price of the successful project between $1M and $5M clears the market (i.e., causes the project to happen and generate $4M in "gains from trade").
This is a net gain for the charity, though the problem that remains open is how exactly those $4M are split up. This is a standard issue in economics, with the answer being: whoever has the better information, and/or a stronger way to pose an ultimatum.
The impact market itself only comes in to create a wonky time-shifted contract, that allows the two parties to find each other. If the charity already knew a good investor in their cause area, they would just hire them as a consultant or something.
It's not clear to me that "just hire the investors as consultants" is a good substitute for the impact market, even if we wave away the substantial problem of identifying the good investors in advance. For one thing, the investor might be hoping for a substantially higher reward (albeit with correspondingly higher risk) in the impact market compared to what is available as a flat fee. Effectively, part of the good investors' returns comes from the bad investors. (Simplified comparison: charity spends $1m each on two projects, of which one ends up producing $2m of impact and the other $0. Versus, investors fund the two projects, and the charity pays $2m to the investors of the impactful project. In both scenarios the same projects have been funded and the charity has paid out the same total amount, but in the second scenario the good investors gain $1m while the bad investors lose $1m.)
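Working through that simplified comparison (a sketch; the labels are mine):

```python
# Two $1m projects; only one turns out to produce $2m of impact.

# Scenario 1 (status quo): the charity funds both up front.
charity_spend_1, impact_1 = 2 * 1_000_000, 2_000_000

# Scenario 2 (impact market): investors fund both; the charity then pays
# $2m for the certificate of the project that worked.
charity_spend_2, impact_2 = 2_000_000, 2_000_000
good_investor_pnl = 2_000_000 - 1_000_000   # +$1m
bad_investor_pnl = 0 - 1_000_000            # -$1m

# Same projects funded, same charity outlay, same impact -- the market just
# moves money from the bad picker to the good picker.
print(good_investor_pnl, bad_investor_pnl)  # 1000000 -1000000
```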
I tend to agree that this mechanism replaces 'hiring' somebody to do the job, but in addition it shifts the risk of the project not working out from the funder to the investor (something you can't easily achieve by hiring someone to select projects).
> The whole idea rests on the assumption that the funder isn't able to identify good projects, because they don't have the technical/political expertise to do so.
I agree with this. The point I raised was not to say it's not worth it (that the method has no benefits), just to mention something that I thought lacked attention.
> though the problem that remains open is how exactly those $4M are split up. This is a standard issue in economics, with the answer being the person who has the better information, and/or a stronger way to pose an ultimatum.
That's a good point. What's your idea on who, in this game, will have the better information or more power to reap the gains?
>a) You end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
And you end up spending $0 on projects that don't work. And your $5 million likely incentivizes more than $5 million in private funding for projects.
> your $5 million likely incentivizes more than $5 million in private funding for projects.
If that's the case, then the issue I mentioned stops being an issue.
I'm not sure, though, that this will happen - and if it does, it might lead to other problems in the medium term.
> You end up with $5 million spent on a project that costs $1 million. Given that money for charitable work & great EA projects is limited, why is this not a major disadvantage?
Small nitpick, but actually you end up with $6m spent in total on a project that only cost $1m to do ;)
I would think that the oracular funder has the discretion here. "Oracular" being the operative word. They think doing X will yield $5m in benefits. If they think there's a better project out there, they would be diverting this $5m elsewhere. We don't question it, because they are "oracular."
From the Funder's perspective, you are choosing between "this $5m benefit to the world never happens" vs. "I spend $5m on a $1m-cost project." The cost is irrelevant to the funder. They believe the outcome is worth $5m to the world, so that's their willingness to pay.
This doesn't seem like a problem to me, if we accept the "oracle" part of "oracular funder."
I am skeptical about this working - but one useful part, for prediction markets or similar, would be the ability to predict that attempt X will fail (or succeed).
This may provide some help against blatant scams and failures.
Though overall, this sounds like an interesting project with many traps, up to the SEC appearing and ending the fun, or maybe even a criminal case for someone (is there a prediction market for that already? :) ).
The oracle's measurement and legal issues are definitely up there on my list of open questions for this.
On issue Number 9B - my experience of the UK pharma regulatory context is that when there's enough money on the line, people can get quite argumentative about exactly how their intervention is assessed. Quite a lot of this is that it is really hard - probably impossible - to say "This intervention saved X lives for sure", because it will always involve more-or-less reasonable assumptions (for example that, counterfactually, everyone wouldn't have spontaneously killed themselves in protest against non-intervention; or, maybe more reasonably, what the correct statistical approach is for participants in the trial who somehow deblind themselves or cross over trial arms). So you end up saying, "Our best guess is that we saved Y lives", and then defending your decision from criticism by the people with a large financial stake in Y being as high as possible.
The rough cost of the regulators defending one pharmaceutical submission in the UK is £143,000. This is an overestimate of how much the oracle would spend to do exactly the same thing as NICE, because large government agencies are inefficient, but this is also an underestimate of the true cost because the pharma companies do a lot of the work for the regulator; for example the company always funds the literature review and development of an economic model which are close to £100k each. My guess is that maybe the total cost of a full assessment including salaries is probably fairly close to £500,000(ish). Also it is unclear to me who funds the trial to assess effectiveness, but this could be hundreds of millions of pounds if you want to RCT a big intervention (and if you don't want to RCT the intervention you introduce a lot more points to argue about).
Basically, I'm a bit unclear how the 'guard labour' costs of the oracular funder are accounted for. These costs are non-trivial and it probably wouldn't be suitable to use the current EA method of well-meaning people giving it their best shot to assess outcomes even if there are enough well-meaning EAs around to make this possible, because if the project works well there will be a whole bunch of monied interests attacking your decisions.
Yes, Scott seems to be significantly under-rating the difficulty of figuring out what worked (to what extent), and what didn't.
Then on top of that, imagine the lobbying that goes into getting the oracular funder to agree to pay out to an effort that was not truly worthy.
I don't see any problems with founders getting rich. If that's a persistent problem, more smart people will flock to the charitable sector, driving down excess returns for founders and also causing more charity to happen.
Yes! Founders can only get "too much" money if there aren't enough founders with good ideas competing for investors' money. If you have faith in the market solving all the other issues here, then the market will definitely solve this one for you also.
Absolutely... every participant needs to be incentivized in order to make this market work. I think what's under-discussed so far has been the expected actual market clearing quantity of projects. Without proper incentives, q will be very low imo.
About investors losing all their money:
I would imagine people might want to buy index-fund equivalents instead of putting all their eggs in one basket?
This is more like private equity than the share market.
Why try to make philanthropy more efficient? Philanthropy is on net socially detrimental. Donors are at the heart of the culture war (e.g. George Soros). They are not accountable to the people they are allegedly helping. I will elaborate on my Substack in a few days.
Effective Altruism as a movement is really interested in making charity more effective.
Why would funders need to be accountable to anyone? It's their money.
And what would that accountability even look like? Are you suggesting something to do with democratically elected representatives, or something saner?
I am suggesting that wealthy people should invest in profit-seeking businesses, which are accountable to the people they are supposed to serve. Non-profits are accountable upward, not downward, and hence are always going to be dysfunctional.
George Soros' charity being accountable to him seems fine by me?
I get that most charities have terrible goals. E.g., most non-profits in the US are religious institutions.
There's a Deontological style argument that if you are funding something, you get to be in charge of it, but there is also a Consequentialist style argument that you get better results by being responsive to end users.
Agreed.
Well, when you are running a company there are both options: equity investors get to (theoretically and indirectly) be in charge of the company. Debt investors normally don't get to be in charge. But if a bankruptcy happens, they prototypically turn into equity investors and the previous equity investors are wiped out.
> hence are always going to be dysfunctional
That is trivially false.
The issue is that if accountability = money, there is no way to be accountable to the poor in a purely capitalist system.
Given that Kling has more publicly available written content than your average ACX commenter, I decided to find whether he'd already elaborated on nonprofits elsewhere, and save everyone some time. Sure enough, he has, although the link has gone stale a couple of times. Here's the Wayback Machine:
https://web.archive.org/web/20150930102402/http://www.aei.org/publication/privilege-nonprofits/
I think it's a fairly representative account of the free market perspective, so I encourage people to read it - if you can respond to it, you'll be responding to more than just one commenter.
Thinking on it, I get an argument that nonprofits are dysfunctional because that's the way the incentives point. Yes, some of you claim there exist nonprofits which aren't, but the argument is that it's not enough to point out a functional one (assuming you could); such a beast is an unstable unicorn, fighting uphill. Even if that nonprofit is founded on the noblest of intents, by the most motivated of paladins, the special status of that nonprofit will inexorably lead it to greedy entryists, or even the founders themselves will find themselves spending more and more of their time lobbying for grants from the state or buying up rivals to reduce pesky competition.
Mr. Kling: have I adequately understood your position here?
The argument I see in that essay isn't against nonprofits existing, mind you. It's against giving them special breaks under the law. If Soros wants to found a nonprofit that promotes the health benefits of sticking forks in your eyes, the law probably shouldn't intervene. It shouldn't even intervene if the Eyefork Foundation successfully persuades a million Americans to join the movement and throw huge parties where they merrily fork themselves. Indeed, if enough people find this intuitively wrong, one can expect another nonprofit or two to spring up extolling the risks of eyeforking, and the law ought to stay out of their way as well. Thus, eyeforking will succeed or fail on its own merits.
One way to measure that success is in terms of how many people come around to it. Trouble is, it's arguably better to convert 10,000 people to eyeforking and spend a million doing it, than to spend ten million and get 10,001. A cleaner way (in a free market) is to encourage the nonprofit to, say, sell subscriptions to customers who would like to see more eyeforking in the world (well, sense it, I guess), and let those customers be the judge of whether this nonprofit is effectively and efficiently promoting the cause. The more people like what it's doing, the more will buy subscriptions, or buy other promotional services it offers; if they don't, maybe they'll instead subscribe to a rival eyefork advocacy organization, or even an opponent. And at that point, there's no need to call them nonprofits. And the incentives are pointing the right way, to boot.
This even helps the poor. The goal isn't to help all poor, no matter the cost, as noble as that may sound; resources are necessarily scarce, and if you let your morals compel you to damn the wallet, full speed ahead, you'll just go bankrupt and end up helping no one. You *have* to be efficient.