I gave an example right at the end of my comment (though I may have added it in an edit, so no offense intended): self-driving cars.
Another commenter in this thread gave more examples, which further underline what I originally meant.
Physical harm means misclassified images/persons/posts/lives/situations/... with the classification taken as gospel, in self-proclaimed good faith. Content moderation, credit scoring, policing, the whole "new" generative space — a lot of dangerous possibilities have opened up.
All of them share the commonly accepted notion of "an AI making a decision" (in everyday language).
This is another level of reliance, even if it arrived gradually, compared to computer systems and software in general.
I am not denying that complicated liability questions exist around those, too.