Here is his list, numbered below, with our responses embedded after each item.
Hopefully this can lead to a healthy discussion…hopefully…
1. It’s good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.
Off to a great start! It IS good and virtuous to be beneficent and want to help others. We will save the example of tithing for later though.

2. It’s good and virtuous to want to help others effectively: to help more rather than less with one’s efforts.
It is considerate to want to help others effectively, though to be good and virtuous you MUST help others effectively, i.e., more rather than less.
3. We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).
We have limited potential to do some good in the face of severe global problems, including but not limited to those suggested.
4. In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.
It is necessary to make deliberate, informed efforts to act effectively to make a positive difference (rather than a negative difference) compared to doing nothing at all.
5. In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what’s decision-relevant.)
It is not so easy to find interventions that we can be reasonably confident are very positive in expectation. So hard, in fact, that one might be better off making a local contribution rather than spending the time and resources on the search. (And I know you’ve addressed this, but this premise is precisely where that directive originates.)
6. Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.
Any such research is always embedded in a constellation of beliefs, methods, and goals. We would like to see those doing such research utilize the following framework.
Let’s take global poverty:
Imagine you have been presented with 100 opportunities that claim to address global poverty:
First, define what counts as addressing global poverty. Perhaps half of the opportunities do not fit your definition, so there are 50 remaining.
Second, define who should be addressing global poverty. Maybe you eliminate 40 more opportunities because the person leading the effort doesn’t fit your profile.
Third, define how one should score the impact on global poverty, and rank the remaining 10 opportunities accordingly.
Make your definitions public, and provide consistent updates on impact.
NOTE: the goal here is to evaluate YOU as the decision maker, and NOT the specific interventions. We need more accountability at the investor/donor level.
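To make the three steps concrete, here is a minimal, runnable sketch of the framework in Python. Every concrete choice in it (the Opportunity fields, the keyword test in step 1, the lead-profile whitelist in step 2, the people-reached-per-dollar scoring rule in step 3) is a hypothetical placeholder, not something the framework prescribes; the framework only requires that YOU choose such definitions, publish them, and apply them consistently.

```python
# Hypothetical sketch of the three-step framework; the specific criteria
# below are placeholders standing in for YOUR published definitions.
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    summary: str
    lead: str                 # who is running the effort
    est_people_reached: int
    est_cost_usd: float

def fits_definition(o: Opportunity) -> bool:
    # Step 1: your public definition of what counts as "addressing
    # global poverty" (placeholder: a keyword test).
    return "poverty" in o.summary.lower()

def lead_fits_profile(o: Opportunity) -> bool:
    # Step 2: your public definition of WHO should be addressing it
    # (placeholder: a whitelist of lead types).
    return o.lead in {"local cooperative", "community org"}

def impact_score(o: Opportunity) -> float:
    # Step 3: your public scoring rule (placeholder: people reached
    # per dollar).
    return o.est_people_reached / o.est_cost_usd

def evaluate(opportunities: list[Opportunity]) -> list[Opportunity]:
    in_scope = [o for o in opportunities if fits_definition(o)]    # e.g. 100 -> 50
    qualified = [o for o in in_scope if lead_fits_profile(o)]      # e.g. 50 -> 10
    return sorted(qualified, key=impact_score, reverse=True)       # rank the rest

if __name__ == "__main__":
    opps = [
        Opportunity("A", "cash transfers for extreme poverty", "local cooperative", 500, 10_000),
        Opportunity("B", "urban arts festival", "community org", 2_000, 50_000),
        Opportunity("C", "microloans for rural poverty", "community org", 300, 5_000),
    ]
    for o in evaluate(opps):
        print(o.name, round(impact_score(o), 3))
```

Because all three functions are published, anyone can audit the decision maker rather than the individual interventions, which is exactly where this framework locates accountability.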
7. So it’s good and virtuous to use quantitative tools and evidence wisely.

QED.

8. GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.
We fundamentally disagree on where accountability lies. If a project or intervention fails, accountability should be directed at the donor/investor level, not at the specific organization. (I have a feeling we may have found our point of divergence!)

9. So it’s good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.
If GiveWell is willing to take responsibility for the impacts of its evaluations, then I suppose an investor/donor can use GW to manage some of the risks associated with being held accountable. As far as we know, GiveWell has no mechanism in place to be held responsible or to enact any remedy, but we would love to be wrong.

10. There’s no good reason to think that GiveWell’s top charities are net harmful.¹
If GiveWell is acting as a fiduciary without the associated responsibilities, accountability, and remedies, then STRUCTURALLY its efforts are harmful.

11. But even if you’re the world’s most extreme aid skeptic, it’s clearly good and virtuous to voluntarily redistribute your own wealth to some of the world’s poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)
Absolutely. In fact, if someone has more than $10m in net worth, then they are STRUCTURALLY harming the world.

12. Many are repelled by how “hands-off” effective philanthropy is compared to (e.g.) local volunteering. But it’s good and virtuous to care more about saving and improving lives than about being hands-on. To prioritize the latter over the former would be morally self-indulgent.
Given structural concerns, there are ways to be hands-off, in the sense of automating a process. For example, our model is the following (sketched in code below):
Once you have earned $10m, any further earnings should be distributed in locally concentric circles. That is, if you earn another $50k, you walk to your next-door neighbor, and if that neighbor has not earned $10m, you give them that $50k and continue to do so with any future earnings UNTIL that neighbor has earned $10m (their earnings plus your gifts). Then you go to the next house, and the next house, and the next house, as do all those neighbors who have achieved the same.
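Here is a minimal, runnable sketch of that process, under simplifying assumptions the description leaves open: neighbors are visited nearest-first, “earned” means lifetime earnings plus gifts received, and each gift is capped at whatever the neighbor still needs to reach $10m. All names and figures are our own illustration.

```python
# Hypothetical sketch of the "locally concentric circles" model described above.
CAP = 10_000_000  # the $10m threshold

def distribute(new_earnings: float, my_total: float, neighbors: list[dict]) -> float:
    """Route earnings above the cap to neighbors, nearest first.

    `neighbors` is a list of {"name": str, "total": float}, sorted from
    nearest to farthest. Returns any amount that could not be placed
    (i.e. every listed neighbor has already reached the cap).
    """
    # Keep only what you still need to reach your own cap.
    keep = min(new_earnings, max(0.0, CAP - my_total))
    surplus = new_earnings - keep

    for n in neighbors:
        if surplus <= 0:
            break
        gap = CAP - n["total"]
        if gap <= 0:
            continue  # this neighbor is done; move to the next house
        gift = min(surplus, gap)
        n["total"] += gift
        surplus -= gift
    return surplus

if __name__ == "__main__":
    street = [
        {"name": "next door", "total": 9_980_000},
        {"name": "two doors down", "total": 40_000},
    ]
    leftover = distribute(50_000, my_total=10_000_000, neighbors=street)
    # $20k finishes off the next-door neighbor; the remaining $30k moves on.
    print(street, leftover)
```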
13. Hits-based giving is a good idea. A portfolio of long shots can collectively be likely to do more good than putting all your resources into lower-expected-value “sure things”. In such cases, this is worth doing.

cf. response to 12.

14. Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)
The risk in this case is NOT held by the person making the impact, but by the people receiving the aid. And it is not fair or reasonable for the person making the donation to “accept” the risk when they aren’t the ones experiencing the negative effects.
Perhaps if the donor wants to be on the hook, via a direct donation, for any failures of distribution, then this worry could be handled.

15. The above point encompasses much relating to politics and “systemic change”, in addition to longtermist long-shots. It’s very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy—just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.
We definitely agree that traditional philanthropy is problematic. Though we think that EA is just an optimized version rather than an innovation, and just as any other optimization arriving in the world now is insufficient to direct the change we need, so too is EA.

16. Anti-capitalist critics of effective altruism are absurdly overconfident—one might even say “hubristic”, to turn around their favorite accusation—about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.
Political assessments of philanthropy are problematic, generally.

17. In general, I don’t think that doing good through one’s advocacy should be treated as a substitute for “putting one’s money where one’s mouth is”. It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I’m completely open to judging political donations (when epistemically justified) as constituting “effective philanthropy”—I don’t think we should put narrow constraints on the latter concept, or limit it to traditional charities.
Mouth breathers are problematic. Pick a damn shovel. Yes.
18. Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact.
Decision theory is cool!
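As a toy illustration of the expected-value tool (and of the hits-based trade-off in 13 and 14): multiply each option’s probability of success by its payoff if it succeeds. The long-shot figures below are invented purely for illustration; the “sure thing” rate of roughly ten lives per $50k is taken from the claim quoted later in 25.

```python
# Hypothetical expected-value comparison; all probabilities and payoffs
# are illustrative assumptions, not claims from the post.
def expected_lives(p_success: float, lives_if_success: float) -> float:
    # Expected value = probability of success x payoff on success.
    return p_success * lives_if_success

# "Sure thing": $50k to a well-evidenced charity, ~10 lives, near-certain.
sure_thing = expected_lives(p_success=0.95, lives_if_success=10)

# Long shot: the same $50k on a speculative intervention with a 1% chance
# of a very large payoff.
long_shot = expected_lives(p_success=0.01, lives_if_success=5_000)

print(f"sure thing: {sure_thing:.1f} expected lives")  # 9.5
print(f"long shot:  {long_shot:.1f} expected lives")   # 50.0
```

On these (invented) numbers the long shot dominates in expectation, which is the case for hits-based giving; whether to accept that much variance is exactly the trade-off reasonable people can disagree about.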
19. It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.

Yes, let’s preserve humanity at all costs. While Benatar’s asymmetry arguments are persuasive (with some adjustments), and should lead many humans to seriously reconsider procreating, we should promote that reconsideration in the context of saving rather than destroying humanity.

20. Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.
Our moral circle is the extent of human existence, and our mode of impact starts with our local neighbors; though as our sphere of impact expands, those far away become our neighbors.
cf. response to 12.

21. Insofar as one’s natural sympathy falls short, it’s better and more virtuous to at least be “continent” (as Aristotle would say) and allow one’s reason to set one on the path that the fully virtuous agent would follow from apt feelings.
Reason alone can save us.

22. Since we can do so much good via effective donations, we have—in principle—excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.
Those who are good at making money should make as much as they can, as long as they keep only $10m; cf. response to 12.
If someone strives to keep more than $10m, then we - those who give away in excess of $10m - should do everything in our legal power to destroy them.

23. Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)
Without the specific condition and accountability that one give away earnings in excess of $10m to one’s neighbors, we will in fact see, and have seen, inevitable corruption: SBF.

24. Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)
With our adjustments, sure.

25. When the stakes are high, there are no “safe” options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell’s top charities, would make you causally responsible for approximately ten deaths every year. That’s really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)
cf. response to 12.

26. Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don’t, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.
cf. response to 12.

27. Deliberately or negligently making the world worse is vicious, bad, and wrong.
Of course, this is what we say EA does…but yes, agreed in principle.

28. Most (all?) of us are not as effectively beneficent as would be morally ideal.
Of course, without a specific, universal criterion, any such judgement is anemic.

29. Our moral motivations are very shaped by social norms and expectations—by community and culture.
Those who lack sufficient cognitive capacities (most people) have moral motivations shaped by social norms and expectations—by community and culture.

30. This means it is good and virtuous to be public about one’s efforts to do good effectively.
Yes; cf. response to 12.

31. If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.
Do not stay quiet; and do not quiet others: convince them. Especially if you’ve deemed them unreasonable, as most people are, indeed, unreasonable.

32. In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.
Specifically, getting to $10m and competing to get as many people as possible to the $10m mark.

33. For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.
As long as the community remains transparent, reasonable, and non-exclusionary.

34. That’s what the “Effective Altruism” community constitutively aims to do.
It does not. The EA community is cultish, overly concerned with personal reputation, and unnecessarily ripe for exploitation.

35. It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).
There are no bad apples: only reasonable apples that do the right thing, and unreasonable apples conditioned by incentive structures. SBF is unreasonable and was conditioned by the incentive structures of EA.

36. Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.
At this point, EA has become like any other ineffective philanthropic institution. Reasonable people would look for a better way; cf. response to 12.

37. Such reflection has indeed happened. (I don’t know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It’s not entirely new, of course: SBF’s fraud flagrantly violated extant EA norms;² everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more “professional” in various ways.)
Without specific, transparent methods and modes of accountability, any such deterrent will fail.

38. No community is foolproof against bad actors. It would not be fair or reasonable to tar others with “guilt by association”, merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.
And so community is not the most effective tool to distribute and manage accountability.
SBF is not a bad apple. There are many, many more. But people should spend their time getting to $10m and giving away the excess rather than quibbling over this point, UNLESS doing so would change your mind and move you from an EA model to an innovation ethics model.

39. The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.
In theory, not practice. And all that matters, in this case, is practice.

40. The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.
While they are annoying, that is not why I reject EA, as I hope has been clear, but I would gladly respond to any questions.

41. If you find the EA community annoying, it’s fine to say so (and reject the “EA” label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don’t want to be associated with them.
cf. response to 12.

42. None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)
Nor should anything, but that’s another discussion.