
I wonder if it's possible to audit AI for discriminatory practices. Of course, this would require a completely different set of legal processes (access to the algorithm, access to enough data to prove bias) that could take decades to implement in a way that covers even a slim majority of the obvious protected categories: race, sex, pregnancy status, religion, national origin, veteran status, and age.

Decades of the bar being just that much higher for the disadvantaged demographic. Potentially a generation of disenfranchisement, which we know has generational effects. (The USA arguably still hasn't 'recovered' from the economic effects of segregation.)



This would require it to be generally possible to audit AI at all, which would be a nice problem to solve.


DARPA has this same problem: most people just aren't willing to "trust the machine", and without explainability it's hard to know where a model went wrong, how to improve it, or who messed up. See XAI: https://www.darpa.mil/program/explainable-artificial-intelli...


All the EEOC would need to do is to prove that there is disparate impact[1] based solely on the hiring outcomes of companies using AI to hire. This is what they already do.

[1] https://en.wikipedia.org/wiki/Disparate_impact
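
For reference, the usual yardstick here is the EEOC's "four-fifths rule": if a group's selection rate falls below 80% of the highest group's rate, that's treated as evidence of adverse impact. A minimal sketch of that check in Python, with made-up group names and counts:

    # Four-fifths (80%) rule check; outcomes maps group -> (hired, applied).
    def selection_rates(outcomes):
        return {g: hired / applied for g, (hired, applied) in outcomes.items()}

    def adverse_impact(outcomes, threshold=0.8):
        rates = selection_rates(outcomes)
        best = max(rates.values())
        # Flag any group whose rate is below 80% of the best group's rate.
        return {g: r / best for g, r in rates.items() if r / best < threshold}

    outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
    print(adverse_impact(outcomes))  # {'group_b': 0.625} -> potential disparate impact

Note that the check needs only hiring outcomes, not the model itself, which is why this already works against opaque systems.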


Given that AI models are often not explainable (see my earlier comment in this subthread), access to the algorithm may not be strictly necessary. Though the ability to black-box test it against a wide range of possible inputs might be a good thing to aim for.
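
Black-box testing along those lines could be as simple as a counterfactual probe: hold every feature fixed, flip only the protected attribute, and count how often the decision flips with it. A rough sketch, assuming only that the model exposes some predict(applicant) interface (the ToyModel and feature names below are hypothetical):

    # Flip only the protected attribute; count how often the decision changes.
    def counterfactual_flip_rate(model, applicants, attr, values):
        flips = 0
        for a in applicants:
            decisions = set()
            for v in values:
                probe = dict(a, **{attr: v})  # identical except for attr
                decisions.add(model.predict(probe))
            if len(decisions) > 1:  # outcome depended on the protected attribute
                flips += 1
        return flips / len(applicants)

    class ToyModel:
        # Hypothetical stand-in: hires on experience, but (badly) also
        # conditions on sex.
        def predict(self, a):
            return a["years_experience"] >= 3 and a["sex"] == "M"

    applicants = [{"sex": "M", "years_experience": y} for y in range(6)]
    print(counterfactual_flip_rate(ToyModel(), applicants, "sex", ["M", "F"]))
    # 0.5 -> half the applicants' outcomes hinged on the protected attribute

One caveat: this only catches direct use of the attribute; proxies like names or zip codes would need their own perturbations.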



