we already have The Three Laws of Robotics

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics



A lot of Asimov's work deals with loopholes and inadequacies in these laws, though: for instance, what is a human being? Does surgery violate the first law? If a human is going to injure another human, can the first law be overridden? If a robot believes that some entity is not human, does it have a responsibility to check before injuring that entity?

I mean, realistically, AI safety is not going to be achieved with a list of written laws. Heck, the way it's going, we wouldn't know how to enforce such a list at all: first we would need the AI to understand human languages, and then we would need the AI to care. That is to say, robots would need to know and follow the second law before you could even tell them about it.


You're probably joking, and my humour unit is impaired today. But in case you're not...

Those laws are the requirements document - the $10mm is for the implementation.


If you can define "harm" well enough for the first law to be useful then you've solved the problem and laws 2 and 3 are probably superfluous.



