GAAP compliance has no bearing on accuracy. Accounting intangibles are suggestions at best; the market discovers the true value of assets. Where's the market on bad EA ideas?
The linked article was explicitly stating a philosophical argument in favor of effective altruism as a good philosophy to live by. I see this as equivalent to eugenics - the ends justify the means, after all.
If we are going to live in a society where some people pretend they can calculate the benefit of all their actions, then I am free to say how absolutely fucked that entire process is, and point out the real-world damage that it has explicitly caused.
If someone was openly a National Socialist, we wouldn't have any problem pointing out the problems the Nazis caused. But with EAs who recently lit tens of billions of dollars on fire to buy penthouses on Caribbean islands and castles in rural England, we have to give them a pass? Excuse me if I'm not open to listening to their thoughtful commentary.
It looks like you've interacted with some nasty folks who were EAs - I don't think this was a typical cross-section of EAs. Sorry to hear about your bad experiences.
As one example, a friend of mine lost over $20 million to SBF / FTX, which funded said castle. However he did not sell his claim and is optimistic he will get most of it back. Money seems like a number on a page until you see how it affects real people!
As another example, I had an EA VC investor nuke one of my seed investments in order to have a different one of his portcos acquire the IP for scrap because he had a far higher stake in the other portco, which was in the same vertical. Ruined a lot of employees' ISOs as well as my equity. But hey he's an EA - can't blame him! His expected value was simply higher.
No, you can blame EAs for making stupid decisions and being generally bad people if they are making stupid decisions and being generally bad people. I'll second that I'm sorry for the bad experiences you've had with EAs.
In my experiences, lots of them are simply good people trying to do more good. I hope you can have better experiences with them in the future.
Take anything you like. A cause, religious group, political party, sports fanbase, etc.
Some subset of each group are generally decent people, who do their best to be good to the people around them, and to live their lives in a generally moral and ethical manner.
Some subset of each group have no interest in morals or ethics, and attach themselves to the group for purely selfish reasons.
Judging the entire group based on the selfish or amoral subset is a logical fallacy of the most basic sort, and borders on religiosity. This is even more problematic when you look at the kinds of negative EA situations that have (rightly) caused controversy. The high-profile, big-money cases get the attention.
If confronted with someone who subscribes to the EA philosophy and by all measurable indicators has done incredibly good things for the world, would you be willing to change your mind?
Yes! This was the realization that started moving me away from many flavors of consequentialist utilitarianism.
There is, to me, no ontological distinction between means and ends, or between the cause and the effect of an action. There is a vast literature prioritizing the latter at the expense of the former - an illusion brought upon us by the arrow of time, which our consciousness happens to be travelling in the same direction as. But if we're going to prioritize along that axis, we might as well claim that actions to the left of us are privileged relative to actions to the right of us.
I admit this is splitting philosophical hairs. "Doing X at time T means at time T + 1 many many sentient beings will benefit" is not really a hard-consequentialist argument - we don't prioritize X because it is at time T. But now consider: "if future sentient beings no longer care about Y, then doing X now will benefit nobody; deciding on not-X, however, saves resources we can then use towards whatever we do care about." Now totalize that, and add in a modesty claim: as the people at the start of history, we (maybe) have no control over what moral concerns future beings will have at all, and if we do, then we can (maybe) choose to craft a future where no moral concerns are detected by any sentient beings at all.
Is that good? I have no idea, but it's a lot more fun to try to puzzle through.
My brother has a kidney ailment, so the expected value of me donating to a stranger is less than the expected value of me donating to my brother. This is because my brother has the potential for children. If he has children, they may become doctors. If they become doctors, they will potentially save a huge number of people. Of course the risk of them not becoming doctors is high. So I should investigate who around me is a doctor that needs my kidney. But then I need to assess whether their doctoring is based upon sound methodology, and the correlation of their GPA vs. their schooling, calculating the ideal number of utils to balance dfcsslmksn;kzknfvknzlgn;lz;lnzglzll
This is about as bad and ethically compromised a take as someone who hates crypto-bros and would immediately cease all contact with anyone they identify as pro-cryptocurrency (say, someone who believes it superior to the current financial system) and blackball them from all business interests, justifying doing so to protect their wealth because they know for a fact that all those damn crypto-bros are utter pieces of shit.
I have a good friend who is a communist I strongly disagree with. I believe quite-bad things about communism as a philosophy and what it inevitably leads to. As of yet, I have not excluded all people who are communists from my life or business interests due to believing their philosophy makes all of them bad people. I can say the same about Christians or Jehovah's Witnesses or Randian libertarians or Kantians.
I strongly disagree with all their ethical worldviews and metaphysics, but I don't automatically believe people in those categories are evil, nor believe they would stab me in the back if they could, even if I believe the philosophies in question can lead to more backstabbing behavior.
It's simply discrimination, and it would morally compromise me, to hate any group of people for what they believe. It wouldn't surprise me if it were also very illegal, especially if you decline to hire someone who identifies as EA.
I would be moderately surprised if association with Effective Altruism was very much more predictive of sociopathy than association with Monero.
Which isn't to say that either is so unforgivably horrible, just that it's very funny to see someone try to make this case without a shred of self-awareness.