
I know it's a bit dramatic at this point to take this stance ... but this almost gives me déjà vu of racism. Like a modern version of: http://upload.wikimedia.org/wikipedia/commons/f/fa/Little_Ro...

Here's what I mean: I'm not suggesting that AI has, or will have, some innate set of "rights". Nor am I suggesting that anyone is wrong for wanting to fund and research "AI Safety". It's just the first thing that came to mind while reading that post, as it spoke about the need to steer, control, and regulate AI so that it's beneficial for "us" ... already setting up an "us" vs. "them" dynamic.

Anyways, just thought it was an interesting juxtaposition to mention ... thoughts?



I think your analogy demonstrates the perils of anthropomorphism when thinking about AI. If we design a resource-consuming autonomous entity it should be assumed that it will compete with us for the resources it uses until proven otherwise. That is at the core of the concern about AI safety. It is about preventing human extinction.


I think it depends a great deal on how charitable our intentions are towards the AI.

Few would argue that, in raising a child, acting on the child's behalf and shaping their learning is an inappropriate arrogation of control. On the other hand, never relinquishing that control once it becomes clear the child is independent would be.

If that child goes on to be superior in intellect, capability, and so on, that child's opinions and values are likely to be a direct reflection of the parents', or so one hopes. Sure, there's no guarantee, but is there a better approach?

When commenters talk about "trapping" or "imprisoning" a rogue AI, I feel like we've already missed the point. And yeah, such talk does have echoes of sins of the past.

I think it's also relevant to ask what our expectations of the AI would be, and how much of a burden it places on the AI. If you're thinking you need to "capture" an AI that's trying to escape so it can continue "working" for you... that sounds like you're doing something terribly wrong.

On the other hand, continuing the parenting analogy, if what you're asking for amounts to a trivial favor, or some occasional assistance with a burden, that doesn't seem at all unfair.

"Bobby, could you come over this afternoon and unconfuse our legislative body? It's all gummed up and going nowhere."


I think it's incredibly selfish and poorly thought out to create another kind of mind solely so that it serves us.

Much like parenting, I think it should be done because there's another kind of mind to be had, regardless of the exact outcome (which we shouldn't necessarily try to fully control).


Parents do not routinely expect to be murdered by their children, nor is it moral to expect or require such.


About five parents a week are killed by their children in the US.

Of course, this isn't a HUGE danger; I'm just saying that it's a risk of having children. (And this isn't counting things like mothers dying in childbirth, which also still happens.)

My point wasn't that we should expect it to happen, but rather that neurotically trying to ensure a child never could (or attempting to) is worse parenting than simply having a child, raising it, and hoping for the best.

As a secondary point: it's only among humans that parents don't expect to be regularly killed by their offspring; in other species, such an act is a routine occurrence. There's no particular reason to think that we're a super-special species in the grand scheme of things, and it's entirely possible that doing something like siring an artificial species better than us would end in our deaths. That doesn't, by default, make it not worth doing.



