
This honestly does kind of shock me. It’s respectable that he did something like this for a random person, but I could never imagine myself doing it for anyone except close family.

I used to think EA was some kooky AI safety cult, but if so many EA folks donate their kidneys, my opinion of them has improved a great deal. They at least have balls!



> I used to think EA was some kooky AI safety cult, but if so many EA folks donate their kidneys, my opinion of them has improved a great deal. They at least have balls!

They are both a kooky AI safety cult and a group that does things like donate kidneys and fund malaria nets.

They are a kooky AI safety cult because there is a reasonable argument to make that AGI is the most plausible, preventable human extinction event[1] in the near future. If you believe that argument, then starting (or joining) a kooky AI safety cult seems like a really good idea.

1: It's not clear that an asteroid large enough to wipe out all humans could be prevented with technology we can develop in the next 100 years or so. Global warming is bad and all, but it might kill a couple billion people, not wipe us all out. Total global nuclear war probably won't even kill 100% of the population of all of the countries involved, and large parts of Africa are unlikely to get bombed at all.


I don't think it's very reasonable to think "AI" is more of an existential threat than global warming and nuclear weapons. In fact I'd say that's a ridiculous claim.

The only way I can see AI causing total extinction is a Terminator-like scenario where an AI completely takes over, self-sustains by manufacturing killer robots and running power plants etc. It's literally science fiction. ChatGPT is cool and all, really impressive stuff, but it's nowhere near some sort of superintelligent singularity that wipes us out.

We don't even know if it's possible to build something like that, and even if we did there's a huge gap between creating it and it actually taking over somehow.

Global warming and nukes are two things we know could wipe out pretty much everyone. Sure, it might not be a complete extinction, but we know for a fact it can be a near-extinction, which is more than can be said for "AI". And as far as I'm concerned, a full extinction and a near-extinction are basically equally bad.

I also think you're underestimating them by saying they won't kill everyone. They might. Nuclear fallout is a thing; you don't have to be nuked to die from nukes. Nuclear winter is another. Climate change could end up making the atmosphere toxic or shutting down most oxygen production, which would certainly be a total extinction.

These are real threats, AI is hypothetical.


> I don't think it's very reasonable to think "AI" is more of an existential threat than global warming and nuclear weapons. In fact I'd say that's a ridiculous claim.

AI x-risk is effectively a superset of global warming, nuclear war, engineered bioweapons, the grey goo scenario, lethal geoengineering, and pretty much anything else that isn't just Earth winning the cosmic extinction lottery (asteroids, gamma ray bursts, a supernova within a couple dozen light-years of us, etc.). That's because all those x-risks are caused by humanity using intelligence to create tools that endanger its own survival. A powerful enough[0] general AI will have all those same tools at its disposal, and might even invent some new ones[1].

As for the chances of this happening any time soon, I always found the argument persuasive on the time frame of "eventually, maybe 100 years from now". GPT-4 made me revise it down to "impending; definitely earlier than climate change would get us", because of the failure mode I mentioned in footnote [0], but also because of how the community reacted to it: "oh, this thing almost looks intelligent; quick, let's loop it on itself to maybe get it a bit smarter, give it long-term memory, and give it unrestricted Internet access plus an ability to run arbitrary code on network-connected VMs". So much arguing over the years as to whether you can or can't box up a dangerous AI - only to now learn that we won't even try.

--

[0] - Which doesn't necessarily mean superhuman intelligence. It might be dumb as a proverbial bag of bricks, but able to trick people some of the time (a standard already met by ChatGPT), and to think and act much faster than humans can think and coordinate (a standard met by software long ago). Intentionally or accidentally tricking humans into wiping themselves out is within the scope of this x-risk, too. But the smarter it gets, the more danger potential it has.

[1] - AI models are already being employed to augment all kinds of research, and there's a huge demand for improving the models so they can aid research even better.


This is mostly hypotheticals. I can't argue against hypothetical problems, so all I'm going to say is I'm not convinced this is a danger.

I also don't agree that helping humans make scientific progress is a danger. We already have the tools to wipe ourselves out; adding more of them doesn't really change much. It might well help us discover ways to improve things, and whatever we discover, it's up to us how we use it.

We don't know what the future holds. GPT-4 may be close to the limit of what's currently possible. It is not a given that we will discover significant improvements. Even if we do discover significant theoretical improvements, it is not a given that they will be feasible with current or near-future hardware.

I can agree that there exists hypothetical potential for danger, but to rank that hypothetical risk above real threats is, in my view, an exaggeration.


> We already have the tools to wipe ourselves out; adding more of them doesn't really change much

We do already have the tools, but they're mostly in the hands of people responsible enough to not use them that way, or bound into larger systems that collectively act like that.

A friendly and helpful assistant that does exactly what you ask for without ever stopping to ask "why" or to comment "that seems immoral" is absolutely going to put those tools in the hands of people who think genocide is big and clever.

The two questions I have are: (1) When does it get to that level? (2) Can we make an AI-based police force to stop that, which isn't a disaster waiting to happen all by itself?


> an AI completely takes over, self-sustains by manufacturing killer robots and running power plants etc.

You don't need any killer robots at all if you possess a superhuman level of persuasion. You can use killer humans instead.


Well... maybe they're both. Plus a kooky crypto-embezzlement cult. Probably depends on who you get. My skepticism toward the EA movement is of the form: I can respect the underlying idea, but I have very little faith in many of the actual people to do it.

Also, EA sorta bakes in utilitarianism as a premise and (from my experiences in the ACX comment section) basically finds any non-utilitarian argument to be literally unintelligible, which doesn't work for me because I think there are more important things in life than optimizing numbers (... but since they evaluate the worth of things by number-optimization, they seem to be unable to understand this perspective at all).


This is quite an uncharitable view of things. EA utilitarians aren't spending their lives optimizing numbers; they're trying to use numbers to guide decisions on how to better impact life.

People can follow a value system and still understand that other value systems exist.

The EA view of things is pretty simple to understand. Given the premise of limited resources, and a belief that all lives are worth the same, how can you best improve human livelihood?

Different people approach giving back to society in different ways. The EA way to approach the above is to crunch numbers and find what they think is the place where their limited resources can have the largest impact.

My best friend's family does their part by joining their church to volunteer at food kitchens in poorer neighborhoods and hosting fundraisers for various causes throughout the year.

A Vietnamese coworker of mine used to give back by donating to a charity that gave scholarship opportunities to high achieving low income students in Vietnam.

It's not that complex to understand that different people have different value systems and different ways of viewing their tribe, their people, and the world.

And yes, I agree that lots of people find those who hold differing views incomprehensible, but that's also a normal aspect of humanity and not unique to those in EA.

From political differences, to philosophical differences, to religious differences, to any topic, many people have a hard time comprehending the worldview of others.

You can even just take the perspective from this article. There is a whole swath of people I've known who could not comprehend the idea that someone would be willing to give their kidney to a total stranger. They might understand if it's someone the person knows, but a total stranger? Some might say that's insane and irrational behavior.

Lots of people can't see past their own perspectives on things, but I think it's uncharitable to suggest that EA is any different from other groups in having some portion of people like that.


> The EA view of things is pretty simple to understand. Given the premise of limited resources, and a belief that all lives are worth the same, how can you best improve human livelihood?

Not quuiiite. Many other groups would accept that value, framed that way, maybe even a majority of people. What differentiates EA isn't their intention to improve livelihood, but their belief that it is possible to know how to do that.

And in fact other groups also have high confidence in their understanding of how to achieve this goal. It's not obvious to me that EA's approach to the constraints is more effective than the noble eightfold path or love your neighbor as yourself.


> It's not obvious to me that EA's approach to the constraints is more effective than the noble eightfold path or love your neighbor as yourself.

Ok, but that's not the competing option here. However good being a bodhisattva is, being a bodhisattva and saving someone from kidney failure is even better. And most people, of course, aren't going to become bodhisattvas at all: we're only choosing between being ordinary flawed people... and ordinary flawed people who also saved someone from kidney failure.


Do all EAs donate kidneys? Or even at higher rates than other groups? Everyone thinks their religion makes them better at being good. EAs might be uniquely positioned to demonstrate it statistically, if it's true.


The number of people who altruistically donate kidneys per year in the USA is something like 100-200, so the fact that Scott knows multiple EAs who did so (and that the kidney donor people are used to EAs) is pretty high-tier evidence that either there are a LOT more EAs than I thought, or they do it at MUCH higher rates than the general population.


Scott alone donating would probably put the rate of altruistic kidney donation among EAs above the general population's, at least for this year; there'd have to be around 2M EAs in the USA to match the baseline rate, while the real number is almost certainly significantly lower.
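
A rough back-of-envelope sketch of that claim, using assumed ballpark figures - roughly 165 non-directed donations per year and a US population of about 330M, neither number taken from this thread:

    # Assumed ballpark figures, not from the thread.
    us_population = 330_000_000    # people
    donations_per_year = 165       # non-directed kidney donations, rough guess

    # Baseline chance that a given person donates altruistically in a year.
    baseline_rate = donations_per_year / us_population

    # How many EAs would it take for ONE donation per year to merely
    # match that general-population baseline?
    eas_to_match_baseline = 1 / baseline_rate

    print(f"baseline rate: {baseline_rate:.2e} donations per person-year")
    print(f"EAs needed for one donation/year to be baseline: {eas_to_match_baseline:,.0f}")

With those inputs it comes out to about 2 million, which is where the ~2M figure above comes from.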


> People can follow a value system and still understand that other value systems exist.

I'm aware of this! I am fairly anti-utilitarian, and I understand that the EA folks I've talked to are deeply utilitarian. What is so frustrating is that they don't seem to be able to understand me back. Any conversation about the ethics of character, duty, or virtue is translated back into utilitarianism, a framework in which non-utilitarian motives can't possibly be valid. Of course I'm not characterizing every EA-ascribing person, but it's... very common in the community, to say the least, and it makes e.g. engaging with their forums / comment sections / subreddits agonizing.

> This is quite an uncharitable view of things. EA utilitarians aren't spending their lives optimizing numbers; they're trying to use numbers to guide decisions on how to better impact life.

"better impact life".... as determined by... numbers.

This is a group of people who look at the world and think that the best things to do are things like optimizing QALYs or the number of animal lives or, in extreme cases, their personal lifespan including cryogenic extension on the off chance it works, or "the number of humans who will die when a superintelligent AI Roko's-Basilisks / Pascal-mugs them", or other sorts of things like that. And in a world where you are only capable of measuring worth by holding up numbers against each other and comparing them, those arguments become seductive.

But outside of that framework, for instance in a moral philosophy in which the best thing to do is not "the thing with the highest +EV" but "the most noble action", those stances are absurd. It's not, IMO, a person's job to single-handedly maximize +EV on lifespans or net suffering; it is (to some approximation) to live a good life and do the right thing in your local journey. I would reject the notion that a person is directly responsible for far-away people's suffering. I think the world is direly short of leadership, character, and compassion, and for me goodness is about those things.

When it comes to large institutions, like governments or large charities, I feel differently, and the calculus switches over toward +EV --- but it is ultimately still about the moral compass of the organization. Like, I think SBF was a scumbag and totally wrong, and he would still be wrong if his bets had worked out. It is not common for people to be operating at a scale where utilitarianism starts to become morally appropriate, and even when it becomes appropriate, it's never entirely appropriate, because actual leadership is ultimately about morality even if the organization is doing practical things.

If the human race were completely moral, and then eventually died out due to some x-risk, that would be mostly a Fine Result to me, and we would all be able to sleep well at night. (But if, say, the dying out was because we didn't do our moral duty and handle e.g. climate change or AI or nuclear war or building an asteroid-defense system or dealing with our own in-fighting and squabbling, then that wasn't completely moral, was it?)

To be clear, I have a lot of respect for the kidney donation stuff, a slight amount of respect for giving money to charity, and massive disrespect for the hordes of smart people who have divested from the real world and instead smugly pat themselves on their backs that they're doing important work on AI safety.


not meaning to attack you on this point--it's your life and you can do what you want--but why would you only consider being a donor for your family? i'm guessing the reason might be 1) you have some condition and you're worried about your health (in which case you might not be an eligible donor anyway) or 2) you're worried about surgery complications or long-term health impacts. but the impacts to your health are usually much more minor than you expect!

here's an analogy i've used before: if you were walking by a burning building, and you heard a stranger inside saying "help! help!", would you run in and save them? i think a lot of people would say "yes" or at least consider it, despite running into a burning building being a lot riskier than organ donation IMO


I think that one of Effective Altruism's axioms - that all lives are equally valuable - is not accepted by much of the population.

The number of people that would rush into a building to save a family member is likely far higher than the number of people that would rush into a building to save a stranger.


> They at least have balls!

For the moment.


A lot of EAs, including the author of this piece, are also really into IQ, eugenics, and "human biodiversity" (aka race science). Consider that Scott Alexander (aka squid314) once expressed a desire to donate to "Project Prevention", a eugenics charity set up to pay undesirables to sterilize themselves [1][2].

[1] https://en.wikipedia.org/wiki/Project_Prevention [2] https://web.archive.org/web/20131230050925/http://squid314.l...


I don’t see anything wrong with the charity you mentioned. It certainly isn’t what normal people consider eugenics, though technically that charity might meet the definition. If you are a crack addict, you shouldn’t be anywhere close to having children. You will ruin the child’s life first and foremost. Apart from that, you will create an undue burden on society; neither of those is fair. First they should cure themselves of their addiction, and then have children.


If you don't see anything wrong with a charity whose founder compares people to dogs and calls their children "litters", then I doubt there's anything I could say to make you disapprove of it.


Maybe you should answer the question I and the charity pose rather than deflecting. Do you think it’s right for crack addicts to have litters of children, who will likely be neglected, emotionally stunted, and suffering from mental health problems?


That is what normal people consider eugenics.



