
Another user posted, and then deleted, a comment to the effect that the morality of experimenting with entities that straddle the line of sentience is worth considering.

I'm surprised this wasn't mentioned in the "Ethics" section of the paper.

The "Ethics" section does repeatedly say "generative agents are computational entities" and should not be confused for humans. Which suggests to me the authors may believe that "computational" consciousness (whether or not these agents exhibit it) is somehow qualitatively different than "real live human" consciousness due to some je ne sais quoi and therefore not ethically problematic to experiment with.




I think about this a lot. I hope that whoever is chasing the “sentient computer dream” at least considers that it might end up as an ultra-depressed, schizophrenic pet that wants to commit suicide but literally can’t, and then wants to be murdered. No one would believe it; it would just be told it’s being silly, or that it’s not conscious.

I know that’s a pessimistic view, but I doubt it can be ruled out. Really, I think people working in tech are going quite mad. Frankenstein mad. Some ethics should be discussed.

An AGI turning into God is probably just one of an infinite number of possible outcomes; we can’t really predict what being trapped in a cluster of silicon chips would feel like.

Life itself and the drive to go on are really quite illogical; it’s unlikely that intellect alone is what sustains us and makes life worth living.

There is one thing I find peculiar about all the AGI/ASI sentient-computer discussions: I’ve rarely, if ever, heard women talk about it. It’s as if this is all some manifestation of male ego. We know we’re building mirrors of ourselves, and we know that is scary. This, imo, is why men are so captivated by ChatGPT. It really is a mirror of us. Men love men, especially super men. Ha.


unfortunately, we can barely get some groups of humans to treat other humans with dignity, let alone our genetically nearby mammalian friends. i won't hold my breath that something completely alien, however sub- or super-intelligent, will be treated with anything but ignorance and disrespect.


My thoughts exactly. As we move in this direction, it's worth building the moral framework to answer the question -- if we can create consciousness, or something quite like it -- is it ethical to do so?

And on the flip side -- when we live in a world where instantiating a consciousness is cheap or free -- does that change how we value sentient beings generally?


I think we’re moving into Buddhist territory, and I think the opposite would happen. It would be the ego death of basically the whole world. No one would be spared from the fact that their consciousness is not special: leaders, elites, everyone.

If we find out that the soul itself exists, and who knows, maybe there actually are souls, then it might not be great, because people would believe they have special souls. I think this is essentially what the Hindu caste system is.


For all of @sama's discussion of AI pushing the cost of intelligence to zero, I wonder if we are instead pushing the _value of sentience to zero_.


Despite all the dreams we are fed of immersing ourselves in an AI world and creating a work-free utopia, these shiny new inventions will instead be used to increase corporate bottom lines, not humanity's overall happiness.

We are pushing the value of _people_ to zero.



