I gave an example right at the end of my comment (though I may have edited it since, so no offense intended): self-driving cars.

Another commenter in this thread gave more examples, which further underline what I originally meant.

Physical harm means misclassified images/persons/posts/lives/situations/... with the classification taken as gospel, in self-proclaimed good faith. Content moderation, credit scoring, policing, the whole "new" generative space — a lot of dangerous possibilities have opened up.

All of them share the commonly accepted concept of "an AI making a decision" (in common language).

This is another level of reliance, even if reached gradually, beyond computer systems and software in general.

I am not denying that complicated liability questions exist there too.

