
That makes no sense. I'd be more worried about it failing the way a self-driving car does. Even a robot that is 100% trustworthy and harmless to humans could still set your house on fire through an electrical short if it malfunctions.

Once I'm scared of humans wanting to kill me, why would I care whether a robot with human-like intelligence wants to kill me too? At that point it's the same thing.

It pushes the ultimate decision higher up the chain of command, for one.

If a general orders a platoon of soldiers to commit a war crime, each soldier still ultimately has to decide whether to pull the trigger. Conscience and self-preservation are at play. Robots don't have second thoughts.
