
If that interests you, ask for a chart of the real-world successes of LessWrong readers compared to non-LW readers.

Also about the mosquito nets versus AI dialectic in Effective Altruism.



As a LessWrong reader I agree that real-world success should be the defining metric, but I want to point out that you can't just compare "LessWrong readers to non-LW readers": if LW readers are unsuccessful IRL, it's entirely possible that some personality factor both makes a person unsuccessful and draws them to reading LW.

Probably the thing to measure is LW vs. other productivity blogs with a similar audience.


> Also about the mosquito nets versus AI dialectic in Effective Altruism.

There's a reasonable debate here about short-term versus long-term thinking. "How to save the most lives the most quickly" is not the only reasonable altruistic goal to optimize for. "How to save the most lives over the long term, and save them permanently rather than merely delaying an inevitable outcome" is worth consideration. And there are already far more people willing to support mosquito nets, and far too few willing to support AGI research, or SENS for that matter.


"AGI research" in practice means "give money to MIRI", whose track record of results on pretty much any measure is less than impressive.

It is really (and literally) "donate to stuff that demonstrably works" versus "donate to MIRI, with its terrible track record, to do something supported primarily by Pascalian arguments."

c.f. http://www.vox.com/2015/8/10/9124145/effective-altruism-glob...

Yudkowsky probably coined the phrase "effective altruist", but people who aren't living sci-fi dreams are in EA now, and they're asking rather pointed questions.

Never confuse "Effective Altruism" and "altruism that is effective". Whatever "effective" actually means in the given context.

Getting back on-topic, there's still no evidence - e.g., a track record of results - that anything within a mile of MIRI/LW is actually any good at all for real-world effectiveness, and things like the mosquito nets versus AI debate count as evidence against it. LW sells a sort of "rationalitiness": it sure feels like rationality.



