The anxiety about regulating absolutely everything in the EU is starting to be harmful.
We see mountains and mountains of regulations, some of which are genuinely harmful (especially in the primary sector) and which, I think, are not well-intentioned.
There is a whole control-everything agenda that I do not find healthy for the average citizen.
I gave an example right at the end of my comment (though I may have edited it in later, so no offence intended): self-driving cars.
Another commenter in this thread gave more examples, which further underline what I originally meant.
Physical harm means misclassified images/persons/posts/lives/situations/... with the classification taken as gospel, in self-proclaimed good faith. Content moderation, credit scoring, policing, the whole "new" generative space — a lot of dangerous possibilities have opened up.
All of them share the commonly understood notion of "an AI making a decision" (in everyday language).
This is another level of reliance on computer systems and software in general, even if it arrives gradually.
I am not denying that complicated liability questions exist about that too.
I assume one of the core ideas would be to protect individuals from decisions made by "black box" algorithms (complex or not, e.g. those used by banks and other companies) without any direct human involvement.
Basically, banning companies from just saying "computer says no".
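To make the "computer says no" pattern concrete, here is a minimal hypothetical sketch (the score_applicant function, weights, and threshold are all invented for illustration): an opaque model's output is treated as the final decision, with no human reviewer, no explanation, and no appeal path — exactly the kind of setup such a rule would target.

    # Hypothetical illustration of a fully automated "computer says no" decision.
    # All names, weights, and thresholds are made up for this sketch.

    def score_applicant(features: dict) -> float:
        """Stand-in for an opaque model: the applicant never sees these weights."""
        weights = {"income": 0.4, "years_employed": 0.3, "postcode_risk": -0.5}
        return sum(weights.get(k, 0.0) * v for k, v in features.items())

    def automated_decision(features: dict) -> str:
        # The score alone decides; no human review, no explanation, no appeal.
        return "approved" if score_applicant(features) >= 1.0 else "denied"

    if __name__ == "__main__":
        applicant = {"income": 2.1, "years_employed": 1.0, "postcode_risk": 0.8}
        print(automated_decision(applicant))  # "denied" — computer says no, end of story.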
Give me real examples of IRL harm. And, most importantly, give me real examples of how exactly state intervention directly solves those problems.
Otherwise this is just a philosophical debate mixed with prepper-type fear of what could happen.