
On the other hand, if a model fails on certain populations because too little training data covering them was collected (since they're historically seen as a less important subgroup), then you've simply encoded your societal biases into your model. Understanding that difference and pointing out problem spots like that is a great job for an ethical AI researcher.
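One concrete way to surface that kind of problem spot is to break evaluation metrics out by subgroup instead of reporting a single aggregate number. A minimal sketch (the data and group labels here are hypothetical, just to show the disparity an aggregate accuracy would hide):

```python
from collections import defaultdict

def per_group_accuracy(groups, y_true, y_pred):
    """Accuracy broken out by subgroup, to expose disparate performance."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for g, t, p in zip(groups, y_true, y_pred):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical example: group "b" is underrepresented (2 of 10 examples),
# and the model gets every one of its examples wrong.
groups = ["a"] * 8 + ["b"] * 2
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

print(per_group_accuracy(groups, y_true, y_pred))
# → {'a': 1.0, 'b': 0.0}
```

Overall accuracy here is 80%, which looks fine until you slice by group — exactly the kind of audit an ethics-focused researcher can push for.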


