Creating regulations for hypothetical scenarios and possibilities you can invent in your head is silly.

Give me real examples of IRL harm. And most importantly give me real examples of how exactly state intervention directly solves those problems.

Otherwise this is just a philosophical debate mixed with prepper type fear of what could happen.



> Give me real examples of IRL harm.

China's social scoring?

Racial profiling in government services? https://www.amnesty.org/en/latest/news/2021/10/xenophobic-ma...

Rejecting qualified applicants? https://www.eeoc.gov/newsroom/eeoc-sues-itutorgroup-age-disc...

Racial bias skipping patients in healthcare? https://www.scientificamerican.com/article/racial-bias-found...

Recruiting ignoring women? https://www.reuters.com/article/us-amazon-com-jobs-automatio...

Wrongly issued debts? https://en.wikipedia.org/wiki/Robodebt_scheme

and so on and so forth

> And most importantly give me real examples of how exactly state intervention directly solves those problems.

Like they solve about a billion other problems daily, and you have no issue with those government interventions.

> Otherwise this is just a philosophical debate mixed with prepper type fear of what could happen.

It's only philosophical if you willingly ignore the world around you.


The anxiety about regulating absolutely everything in the EU is starting to be harmful.

We see mountains and mountains of regulations, some of which are really harmful (especially in the primary sector) and which I think are not well-intentioned.

There is a full control-everything agenda that I do not find healthy for the average citizen.


I gave an example right at the end of my comment (though I may have edited it in since, so no offense intended). Self-driving cars.

Another commenter in this thread gave more examples, which further underline what I originally meant.

Physical harm means misclassified images/persons/posts/lives/situations/... with the classification taken as gospel, in self-proclaimed good faith. Content moderation, credit scoring, policing, the whole "new" generative space: a lot of dangerous possibilities have opened up.

All of them share the commonly accepted concept of "an AI making a decision" (in common language).

This is another level of reliance, even if reached gradually, on computer systems and software in general.

I am not denying that complicated liability questions exist around that too.


I assume one of the core ideas would be to protect individuals from decisions made by "blackbox" (complex or not) algorithms (e.g. those used by banks and other companies) without any direct human involvement.

Basically banning companies from just saying "Computer says no".
