I agree: neither companies nor computers can be moral in themselves; they have to rely on rules that model morals. But to interact with the world they need humans who give them that access, and those humans must be held accountable for the morality of their systems.
This is why the AI-world habit of saying "sorry, computer says no" and pretending to cut people out of the loop is kind of scary: people want to believe this indirection frees them from accountability.
It'll be important for the law to reflect this, but ultimately being lawfully right yet morally wrong can get you shot in the back anyway.