A lot of Asimov's work deals with loopholes and inadequacies in these laws, though: for instance, what counts as a human being? Does surgery violate the first law? If one human is about to injure another, can the first law be overridden? And if a robot believes some entity is not human, does it have a responsibility to check before injuring it?
I mean, realistically, AI safety is not going to be achieved with a list of written laws. Heck, the way things are going, we wouldn't even know how to enforce such a list: first we would need the AI to understand human language, and then we would need it to care. That is to say, robots would already need to know and follow the second law before you could tell them about it.
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics