I am currently writing up a response to criticism of this post and will have it up shortly.

Why hold EA to a high standard?

“Movement drama” seems to be depressingly common — whenever people set out to change the world, they inevitably pick fights with each other, usually over trivialities. What’s the point, beyond mere disagreeableness, of pointing out problems in the Effective Altruism movement? I’m about to start some movement drama, and so I think it behooves me to explain why it’s worth paying attention to this time.

Effective Altruism is a movement that claims that we can improve the world _more effectively_ with empirical research and explicit reasoning. The slogan of the Center for Effective Altruism is “Doing Good Better.”

This is a moral claim (they say they are doing good) and a claim of excellence (they say that they offer ways to do good better.)

EA is also a proselytizing movement. It tries to raise money, for EA organizations as well as for charities; it also tries to “build the movement”, increase attendance at events like the EA Global conference, get positive press, and otherwise get attention for its ideas.

The Atlantic called EA “generosity for nerds”, and I think that’s a fair assessment. The “target market” for EA is people like me and my friends: young, educated, idealistic, Silicon Valley-ish.

The origins of EA are in academic philosophy. Peter Singer and Toby Ord were the first to promote the idea that people have an obligation to help the developing world and reduce animal suffering, on utilitarian grounds. The leaders of the Center for Effective Altruism, Giving What We Can, 80,000 Hours, The Life You Can Save, and related EA orgs, are drawn heavily from philosophy professors and philosophy majors.

What this means, first of all, is that we can judge EA activism by its own standards. These people are philosophers who claim to be using objective methods to assess how to do good; so it’s fair to ask “Are they being objective? Are they doing good? Is their philosophy sound?” It’s admittedly hard for young organizations to prove they have good track records, and that shouldn’t count against them; but honesty, transparency, and sound arguments are reasonable to expect.

Second of all, it means that EA matters. I believe that individuals and small groups who produce original thinking about big-picture issues have always had outsize historical importance. Philosophers and theorists who capture mindshare have long-term influence. Young people with unusual access to power and interest in “changing the world” stand a good chance of affecting what happens in the coming decades.

So it matters if there are problems in EA. If kids at Stanford or Harvard or Oxford are being misled _or_ influenced for the worse, that’s a real problem. They actually are, as the cliché goes, “tomorrow’s leaders.” And EA really seems to be prominent among the ideologies competing for the minds of the most elite and idealistic young people. If it’s fundamentally misguided or vulnerable to malfeasance, I think that’s worth talking about.

Lying for the greater good

Imagine that you are a perfect act-utilitarian. You want to produce the greatest good for the greatest number, and, magically, you know exactly how to do it.

Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do _even more_ good”?

Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as much power as possible for its own sake. Actually building utopia is always something to get around to later; for now you have to build up your strength, so that the future utopia will be even better.

Lying and hurting people in order to gain power can never be bad, because you are always aiming at the greater good down the road, so anything that makes you more powerful should promote the Good, right?

Obviously, this is a terrible failure mode. There’s a reason J.K. Rowling gave her Hitler-like figure Grindelwald the slogan “For the Greater Good.” Ordinary, children’s-story morality tells us that when somebody is lying or hurting people “for the greater good”, he’s a bad guy.

A number of prominent EA figures have made statements that seem to endorse lying “for the greater good.” Sometimes these statements are arguably reasonable, taken in isolation. But put together, there starts to be a pattern. It’s not quite storybook-villain-level, but it has something of the same flavor.

There are people who are comfortable sacrificing honesty in order to promote EA’s brand. After all, if EA becomes more popular, more people will give to charity, and that charity will do good, and that good may outweigh whatever harm comes from deception.

The problem with this reasoning should be obvious. The argument would work just as well if EA did no good at all, and only claimed to do good.

Arbitrary or unreliable claims of moral superiority function like bubbles in economic markets. If nobody ever checks a stock’s value against some kind of ground-truth reality, and everyone just buys or sells based on its current price, prices can become inflated for no reason at all. Likewise, if you don’t insist on honesty in people’s claims of “for the greater good”, you’ll get hijacked into helping people who aren’t serving the greater good at all.

I think it’s worth being suspicious of anybody who says “actually, lying is a good idea” and has a bunch of intelligence and power and moral suasion on their side.

It’s a problem if a movement is attracting smart, idealistic, privileged young people who want to “do good better” and teaching them that the way to do the most _good_ is to lie. It’s arguably even _more_ of a problem than, say, lobbyists taking young Ivy League grads under their wing and teaching them to practice lucrative corruption. The lobbyists are appealing to the most venal among the youthful elite. The nominally-idealistic movement is appealing to the most _ethical_, and corrupting them.

The quotes that follow are going to look almost reasonable. I expect some people to argue that they are in fact reasonable and innocent and I’ve misrepresented them. That’s possible, and I’m going to try to make a case that there’s actually a problem here; but I’d also like to invite my readers to take the paranoid perspective for a moment. If you imagine mistrusting these nice, clean-cut, well-spoken young men, or mistrusting Something that speaks through them, could you see how these quotes would seem less reasonable?

Criticizing EA orgs is harmful to the movement

In response to an essay on the EA forums criticizing the Giving What We Can pledge (a promise to give 10% of one’s income to charity), Ben Todd, the CEO of 80,000 Hours, said:

_Topics like this are sensitive and complex, so it can take a long time to write them up well. It’s easy to get misunderstood or make the organisation look bad._


_At the same time, the benefits might be slight, because (i) it doesn’t directly contribute to growth (if users have common questions, then add them to the FAQ and other intro materials) or (ii) fundraising (if donors have questions, speak to them directly)._


_Remember that GWWC is getting almost 100 pledges per month atm, and very few come from places like this forum. More broadly, there’s a huge number of pressing priorities. There’s lots of other issues GWWC could write about but hasn’t had time to as well._


_If you’re wondering whether GWWC has thought about these kinds of questions, you can also just ask them. They’ll probably respond, and if they get a lot of requests to answer the same thing, they’ll probably write about it publicly._


_With figuring out strategy (e.g. whether to spend more time on communication with the EA community or something else) GWWC writes fairly lengthy public reviews every 6-12 months._

He also said:

_None of these criticisms are new to me. I think all of them have been discussed in some depth within CEA._


_This makes me wonder if the problem is actually a failure of communication. Unfortunately, issues like this are costly to communicate outside of the organisation, and it often doesn’t seem like the best use of time, but maybe that’s wrong._


_Given this, I think it also makes sense to run critical posts past the organisation concerned before posting. They might have already dealt with the issue, or have plans to do so, in which posting the criticism is significantly less valuable (because it incurs similar costs to the org but with fewer benefits). It also helps the community avoid re-treading the same ground._

In other words: the CEO of 80,000 Hours thinks that people should “run critical posts past the organization concerned before posting”, but also thinks that it might not be worth it for GWWC to address such criticisms because they don’t directly contribute to growth or fundraising, and addressing criticisms publicly might “make the organization look bad.”

This cashes out to saying “we don’t want to respond to your criticism, and we also would prefer you didn’t make it in public.”

It’s normal for organizations not to respond to every criticism — the Coca-Cola company doesn’t have to respond to every internet comment that says Coke is unhealthy — but Coca-Cola’s CEO doesn’t go around shushing critics either.

Todd seems to be saying that the target market of GWWC is not readers of the EA forum or similar communities, which is why answering criticism is not a priority. (“Remember that GWWC is getting almost 100 pledges per month atm, and very few come from places like this forum.”) Now, “places like this forum” seems to mean communities where people identify as “effective altruists”, geek out about the details of EA, spend a lot of time researching charities and debating EA strategy, etc. Places where people might question, in detail, whether pledging 10% of one’s income to charity for life is actually a good idea or not. Todd seems to be implying that answering the criticisms of these people is not useful — what’s useful is encouraging outsiders to donate more to charity.

Essentially, this maps to a policy of “let’s not worry over-much about internally critiquing whether we’re going in the right direction; let’s just try to scale up, get a bunch of people to sign on with us, move more money, grow our influence.” An uncharitable way of reading this is “c’mon, guys, our marketing doesn’t have to satisfy you, it’s for the marks!” Jane Q. Public doesn’t think about details, she doesn’t nitpick, she’s not a nerd; we tell her about the plight of the poor, she feels moved, and she gives. _That’s_ who we want to appeal to, right?

The problem is that it’s not quite fair to Jane Q. Public to treat her as a patsy rather than as a peer.

You’ll see echoes of this attitude come up frequently in EA contexts — the insinuation that criticism is an inconvenience that gets in the way of movement-building, and movement-building means obtaining the participation of the uncritical.

Responding to a criticism of a post on CEA fundraising, Ben Todd said:

_I think we should hold criticism to a higher standard, because criticism has more costs. Negative things are much more memorable than positive things. People often remember criticism, perhaps just on a gut level, even if it’s shown to be wrong later in the thread._

This misses the obvious point that criticism of CEA has costs to CEA, but possibly has benefits to other people _if CEA really has flaws_. It’s a sort of “EA, _c’est moi_” narcissism: what’s good for CEA is what’s good for the Movement, which is what’s good for the world.

Keeping promises is a symptom of autism

In the same thread criticizing the Giving What We Can pledge, Robert Wiblin, the director of research at 80,000 Hours, said:

_Firstly: I think we should use the interpretation of the pledge that produces the best outcome. The use GWWC and I apply is completely mainstream use of the term pledge (e.g. you ‘pledge’ to stay with the person you marry, but people nonetheless get divorced if they think the marriage is too harmful to continue)._


_A looser interpretation is better because more people will be willing to participate, and each person gain from a smaller and more reasonable push towards moral behaviour. We certainly don’t want people to be compelled to do things they think are morally wrong – that doesn’t achieve an EA goal. That would be bad. Indeed it’s the original complaint here._


_Secondly: An “evil future you” who didn’t care about the good you can do through donations probably wouldn’t care much about keeping promises made by a different kind of person in the past either, I wouldn’t think._


_Thirdly: The coordination thing doesn’t really matter here because you are only ‘cooperating’ with your future self, who can’t really reject you because they don’t exist yet (unlike another person who is deciding whether to help you)._


_One thing I suspect is going on here is that people on the autism spectrum interpret all kinds of promises to be more binding than neurotypical people do (e.g. [https://www.reddit.com/r/aspergers/comments/46zo2s/promises/](https://www.reddit.com/r/aspergers/comments/46zo2s/promises/)). I don’t know if that applies to any individual here specifically, but I think it explains how some of us have very different intuitions. But I expect we will be able to do more good if we apply the neurotypical intuitions that most people share._


_Of course if you want to make it fully binding for yourself, then nobody can really stop you._

In other words: Rob Wiblin thinks that promising to give 10% of income to charity for the rest of your life, which the Giving What We Can website describes as “a promise, or oath, to be made seriously and with every expectation of keeping it”, does not literally mean committing to actually do that. It means that you can quit any time you feel like it.

He thinks that you should interpret words with whatever interpretation will “do the most good”, instead of as, you know, what the words actually mean.

If you respond to a proposed pledge with “hm, I don’t know, that’s a really big commitment”, you must just be a silly autistic who doesn’t understand that you could just break your commitment when it gets tough to follow! The movement doesn’t depend on weirdos like you, it needs to market to _normal_ people!

I don’t know whether to be more frustrated with the ableism or the pathologization of integrity.

Once again, there is the insinuation that the growth of EA depends on manipulating the public — acquiring the dollars of the “normal” people who don’t think too much and can’t keep promises.

Jane Q. Public is stupid, impulsive, and easily led. That’s why we want her.

“Because I Said So” is evidence

Jacy Reese, a prominent animal-rights-focused EA, responded to some criticism of Animal Charity Evaluators’ top charities on Facebook as follows:

_Just to note, we (or at least I) agree there are serious issues with our leafleting estimate and hope to improve it in the near future. Unfortunately, there are lots of things that fit into this category and we just don’t have enough research staff time for all of them._


_I spent a good part of 2016 helping make significant improvements to our outdated online ads quantitative estimate, which now aggregates evidence from intuition, experiments, non-animal-advocacy social science, and veg pledge rates to come up with the “veg-years per click” estimate. I’d love to see us do something similar with the leafleting estimate, and strongly believe we should keep trying, rather than throwing our hands in the air and declaring weak evidence is “no evidence.”_

For context here, the “leafleting estimate” refers to the rate at which pro-vegan leaflets cause people to eat less meat (and hence the effectiveness of leafleting advocacy at reducing animal suffering.) The studies ACE used to justify the effectiveness of leafleting actually showed that leafleting was ineffective: an uncontrolled study of 486 college students shown a pro-vegetarianism leaflet found that only one student (0.2%) went vegetarian, while a controlled study conducted by ACE itself found that consumption of animal products was no lower in the leafleted group than in the control group. The criticisms of ACE’s leafleting estimate were not merely that it was flawed, but that it literally fabricated numbers based on a “hypothetical.” ACE publishes “top charities” that it claims are effective at saving animal lives; the leafleting effectiveness estimates are used to justify why people should give money to certain veganism-activist charities. A made-up reason to support a charity isn’t “weak evidence”, it’s lying.
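To make the weakness concrete, here is a minimal back-of-the-envelope sketch of the uncertainty around that 0.2% figure. It assumes only the two numbers quoted above (486 students, one conversion); the Wilson confidence interval is my own choice of illustration, not anything drawn from ACE’s methodology:

```python
# A minimal back-of-the-envelope check, using only the figures quoted
# above (486 leafleted students, 1 reported conversion). The Wilson
# interval is my own choice of method, not part of ACE's estimate.
from math import sqrt

n = 486   # students shown a pro-vegetarianism leaflet (uncontrolled study)
k = 1     # students who reported going vegetarian afterward

p_hat = k / n                      # naive conversion rate, ~0.2%

z = 1.96                           # 95% confidence level
center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
margin = (z / (1 + z**2 / n)) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))

print(f"point estimate: {p_hat:.3%}")                             # ~0.206%
print(f"95% CI: [{center - margin:.3%}, {center + margin:.3%}]")  # ~[0.04%, 1.16%]
# The interval is consistent with effects near zero, and without a
# control group even the one conversion can't be attributed to the leaflet.
```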

In _that_ context, it’s exceptionally shocking to hear Reese talking about “evidence from intuition,” which is…not evidence.

Reese continues:

_Intuition is certainly evidence in this sense. If I have to make quick decisions, like in the middle of a conversation where I’m trying to inspire someone to help animals, would I be more successful on average if I flipped a coin for my responses or went with my intuition?_

But that’s not the point. Obviously, my intuition is valuable to _me_ in making decisions on the fly. But _my_ intuition is not a reason why anybody else should follow my lead. For that, I’d have to give, y’know, _reasons_.

This is what the word “objectivity” means. It is the ability to share data between people, so that each can independently judge for themselves.

Reese is making the same kind of narcissistic fallacy we saw before. Reese is forgetting that his readers are not Jacy Reese and therefore “Jacy Reese thinks so” is not a compelling reason to them. Or perhaps he’s hoping that his donors can be “inspired” to give money to organizations run by his friends, simply because he tells them to.

In a Facebook thread on Harrison Nathan’s criticism of leafleting estimates, Jacy Reese said:

_I have lots of demands on my time, and like others have said, engaging with you seems particularly unlikely to help us move forward as a movement and do more for animals._

Nobody is obligated to spend time replying to anyone else, and it may be natural to get a little miffed at criticism, but I’d like to point out the weirdness of saying that criticism doesn’t “help us move forward as a movement.” If a passenger in your car says “hey, you just missed your exit”, you don’t complain that he’s keeping you from moving forward. That’s the whole point. You might be moving in the wrong direction.

In the midst of this debate somebody commented,

_“Sheesh, so much grenade throwing over a list of charities!  I think it’s a great list!”_

This is a nice, Jane Q. Public, kind of sentiment. Why, indeed, should we argue so much about charities? Giving to charity is a nice thing to do. Why can’t we all just get along and promote charitable giving?

The point is, though — it’s giving to a _good_ cause that’s a praiseworthy thing to do. Giving to an _arbitrary_ cause is not a good thing to do.

The whole point of the “effective” in “Effective Altruism” is that we, ideally, care about whether our actions _actually_ have good consequences or not. We’d like to help animals or the sick or the poor, in _real life_. You don’t promote good outcomes if you oppose objectivity.

So what? The issue of exploitative marketing

These are informal comments by EAs, not official pronouncements. And the majority of discussion of EA topics I’ve seen is respectful, thoughtful, and open to criticism. So what’s the big deal if some EAs say problematic things?

There are some genuine scandals within the EA movement that pertain to deceptive marketing. Intentional Insights, a supposed “EA” organization led by history professor Gleb Tsipursky, engaged in astroturfing: Tsipursky paid for likes and positive comments, made false claims about his social media popularity, falsely claimed affiliation with other EA organizations, and may have required his employees to “volunteer” large amounts of unpaid labor for him.

To their credit, CEA repudiated Intentional Insights; Will MacAskill’s excellent post on the topic argued that EA needs to clarify shared values and guard against people co-opting the EA brand to do unethical things. One of the issues he brought up was

_People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)_

which is a perfect description of Tsipursky’s behavior.

I would argue that the problem goes beyond Tsipursky. ACE’s claims about leafleting, and the way ACE’s advocates respond to criticism about it, are very plausibly examples of dishonesty defended with “ends justify the means” rhetoric.

More subtly, the most central effective altruism organizations and the custodians of the “Effective Altruism” brand are CEA and its offshoots (80,000 Hours and Giving What We Can), which are primarily focused on movement-building. And sometimes the way they do movement-building risks promoting an exploitative rather than cooperative relationship with the public.

What do I mean by that?

When you communicate cooperatively with a peer, you give them “news they can use.” Cooperative advertising is a benefit to the consumer — if I didn’t know that there are salons in my neighborhood that specialize in cutting curly hair, then you, as the salon, are _helping_ me by informing me about your services. If you argue cooperatively in favor of an action, you are telling your peer “hey, you might succeed better at your goals if you did such-and-such,” which is helpful information. Even making a request can be cooperative; if you care about me, you might want to know how best to make me happy, so when I express my preferences, I’m offering you helpful information.

When you communicate exploitatively with someone, you’re trying to gain from their weaknesses. Some of the sleazier forms of advertising are good examples of exploitation; if you make it very difficult to unsubscribe from your service, or make spammy websites whose addresses are misspellings of common website names, or make the “buy” button large and the “no thanks” button tiny, you’re trying to get money out of people’s forgetfulness or clumsiness rather than their actual interest in your product. If you back a woman into an enclosed space and try to kiss her, you’re trying to get sexual favors as a result of her physical immobility rather than her actual willingness.

Exploitativeness is treating someone like a mark; cooperativeness is treating them like a friend.

A remarkable amount of EA discourse is framed cooperatively. It’s about helping each other figure out how best to do good. That’s one of the things I find most impressive about the EA movement — compared to other ideologies and movements, it’s unusually friendly, exploratory, and open to critical thinking.

However, if EA orgs, as they grow and professionalize, are deliberately targeting growth among less-critical, less-intellectually-engaged, lower-integrity donors, while being dismissive towards intelligent and serious critics (and I think some of the discussions I’ve quoted on the GWWC pledge suggest exactly that), then I worry that they’re trying to get money out of people’s weaknesses rather than gaining from their strengths.

Intentional Insights used the traditional tactics of scammy, lowest-common-denominator marketing. To a sophisticated reader, their site would seem lame, even if you didn’t know about their ethical lapses. It’s buzzwordy, clickbaity, and unoriginal. And this isn’t an accident, any more than it’s an accident that spam emails have poor grammar. People who are fussy about quality aren’t the target market for exploitative marketing. The target market for exploitative marketing is and always has been the exceptionally unsophisticated. Old people who don’t know how to use the internet; people too disorganized to cancel their subscriptions; people too impulsive to resist clicking on big red buttons; sometimes even literal bots.

The opposite approach, if you don’t _want_ to drift towards a pattern of exploitative marketing, is to target people who seek out hard-to-fake signals of quality. In EA, this would mean paying attention to people who have high standards in ethics and accuracy, and treating _them_ as the core market, rather than succumbing to the temptation to farm engagement metrics from whoever is easiest to recruit in the short term.

Using “number of people who sign the GWWC pledge” as a metric of engagement in EA is nowhere near as shady as paying for Facebook likes, but I think there’s a similar flavor of exploitability between them. You don’t want to be measuring how good you are at “doing good” by counting how many people make a symbolic or trivial gesture. (And the GWWC pledge isn’t trivial or symbolic for most people…but it might become so if people keep insisting it’s not meant as a real promise.)

EAs can fight the forces of sleaze by staying cooperative — engaging with those who make valid criticisms, refusing the temptation to make strong but misleading advertising claims, respecting rather than denigrating people of high integrity, and generally talking to the public like we’re reasonable people.

CORRECTION

A previous version of this post used the name and linked to a Facebook comment by a volunteer member of an EA organization. He isn’t an official employee of any EA organization, and his views are his own, so he viewed this as an invasion of his privacy, and he’s right. I’ve retracted his name and the link.