
As Adam Becker shows in his book, EAs started out reasonably ("give to charity as much as you can, and research which charities do the most good") but have gotten into absurdities like "it is more important to fund rockets than to help starving people or prevent malaria, because maybe an asteroid will hit the Earth, killing everyone, starving or not".


It's also not a very big leap from "My purpose is to do whatever is the greatest good" to "It doesn't matter if I hurt people as long as the overall net result is good (by some arbitrary standard)."


99% of effective altruists and rationalists agree that you shouldn't hurt people as part of some complicated scheme to do good. For example, here is Eliezer Yudkowsky in 2008 saying exactly that, and explaining why it's true: https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t...


I believe they believe that, on its face.

I also believe that idealistic people will go to great lengths to convince themselves that their desired outcome is, in fact, the moral one. It starts by saying things like, "Well, what is harm, actually..." and then constructing a definition that supports the conclusions they've already arrived at.

I'm quite sure Sam Bankman-Fried did not believe he was harming anybody when he lost/stole/defrauded his investors' and depositors' money.


Like a very old dude once said: "No one is willfully evil."


This isn’t a hypothetical leap either. This thinking directly led to the murders committed by the Zizians.


I think this is the key comment so far.


It seems very odd to criticize the group that most reliably and effectively funds global health and malaria prevention for this.

What is your alternative? What's your framework that makes you contribute to malaria prevention more or more effectively than EAs do? Or is the claim instead that people should shut down conversation within EA that strays from the EA mode?


The simple answer is you don't need a "framework" -- plain empathy for the less fortunate is good enough. But if EAs actually want to do something about malaria (although the Gates Foundation does much, much more in that regard than the Centre for Effective Altruism), more power to them. Still, as Becker notes from his visits to the Centre, things like malaria and malnutrition are not its primary focus.


EA people gave a total of $817,276,989 to malaria initiatives through GiveWell[1][2].

How much more do they need to give before you will change your mind about whether "EAs actually want to do something about malaria"?

[1] https://www.givewell.org/all-grants-fund

[2] https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...


I've used GiveWell for donations and don't consider myself an Effective Altruist. Does GiveWell get to count for just the EA community?


By analogy, if a Catholic Church created a charity for curing malaria, and I donated money to it, that wouldn't make me Catholic. But still the existence of the charity, especially if people donated over a billion dollars to it, would be a credible argument against people saying "Catholics do nothing about curing malaria". Does that make sense?


I wish it were, but it's clearly not enough. There are plenty of people with healthy emotional empathy in the world, and yet children still die of easily preventable diseases.

I am plenty happy to simp for the Gates foundation, but I think it's important to acknowledge that becoming Bill Gates to support charity is not a strategy the average person can replicate. The question for me is how do I live my life to support the causes I care about, not who lives more impressive lives than me.


I think the group that most reliably and effectively funds global health -- at least in terms of total $ -- would be the United Nations, or perhaps the Catholic Church, or otherwise one national government or another.

If you exclude "nations" then it does look to be the Church: "The Church operates more than 140,000 schools, 10,000 orphanages, 5,000 hospitals and some 16,000 other health clinics". Caritas, the relevant charitable umbrella organization, gives $2-4b per year on its own, and that's not including the many, many operations run by religious orders not under that umbrella, or by the hundreds of thousands of parishes around the world (most of which operate charitable operations of their own).

And yet, rationalists are totally happy criticizing the Catholic Church -- not that I'm complaining, but it seems a bit hypocritical.


I appreciate the good these organizations do, but I don't think that's the right measure of it. A person wouldn't, in expectation, serve global health better by becoming Catholic than by joining EA. That Catholicism is large isn't the same as it being effective at solving malaria. EA is tiny relative to the Church but still manages to support funding within an order of magnitude of the figures you mention here, with exact numbers depending on how you count.

Similarly, government funding is not an overlooked part of EA. Working in government and on government aid programs is something EA discusses, especially high-leverage areas like policy. If there's a more standard government role an individual can take that has better outcomes than what EAs do, that would be an important update and I'd be interested in hearing it. But the criticism that EA is just not large enough is hard to act on, and more of a work in progress than a moral failing.


Rationalists and EAs spend far more time praising the Catholic Church and other religious groups than criticizing them - since they spend essentially no time criticizing them, and do occasionally praise them.


How do they escape the reality that the Earth will one day be destroyed, and that it's almost certainly impossible to ever colonize another planetary system? Just suicide out?


If you value maximizing the number of human lives that are lived, then even “almost certainly impossible” is enough to justify focusing a huge amount of effort on that. Maybe interstellar colonization is a one in a million shot, but it would multiply the number of human lives by billions or trillions or more.
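The arithmetic behind this argument can be made explicit. A minimal sketch, using purely illustrative numbers (the "one in a million" odds and the population multiplier are the comment's hypotheticals, not real estimates):

```python
def expected_lives(p_success: float, lives_if_success: float) -> float:
    """Expected number of lives enabled by a long-shot intervention."""
    return p_success * lives_if_success

# Hypothetical figures from the comment above:
p_colonization = 1e-6        # "one in a million shot"
future_lives = 1e12          # a trillion future lives if it works

# Even at one-in-a-million odds, the expected payoff is a million lives,
# which is why tiny probabilities can dominate this kind of calculus.
print(expected_lives(p_colonization, future_lives))  # 1000000.0
```

The point of the sketch is only that multiplying a tiny probability by an astronomically large payoff can yield a large expected value; whether that is a sound way to allocate charity is exactly what the thread is disputing.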


Is the argument that we should try to do things that will benefit our theoretical and theoretically multitudinous descendants? Or is it that just taking action to make their existence more likely is a moral good? Because the latter is just brain dead.


Good question. I think it has to be the latter, given the immense time involved. You can make a connection between driving progress in certain areas today and increasing the odds that humanity eventually colonizes the stars. I don’t think you can make any connection with how well off those far-future humans will be.


If that's what's meant, it's a hilarious perversion of utilitarianism.




