
Everything you say is true, but in context the important point is that the biases ML models pick up overwhelmingly stem from deficiencies in the training data, and they are often incredibly difficult to spot unless you already know they exist. For a practical example, see the recent Twitter image-cropping oddities (https://twitter.com/bascule/status/1307440596668182528).

The idea (as quoted) that models are routinely picking up biases directly from researchers is complete nonsense.
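(Not anything from the thread itself, just an illustration of the point above.) A minimal sketch of the kind of disaggregated check that can surface this sort of hard-to-spot bias: rather than looking at one aggregate metric, break the evaluation down by the attribute you suspect matters. Every name here (model, X_test, y_test, the group column) is hypothetical; it assumes a fitted scikit-learn-style binary classifier and a labelled test set.

    # Minimal sketch (hypothetical names throughout): disaggregated accuracy,
    # the kind of check that surfaces biases a single overall number hides.
    import pandas as pd
    from sklearn.metrics import accuracy_score

    def per_group_accuracy(model, X_test, y_test, groups):
        """Accuracy broken down by subgroup; large gaps hint at data-driven bias."""
        preds = pd.Series(model.predict(X_test), index=X_test.index)
        return pd.Series({
            g: accuracy_score(y_test[groups == g], preds[groups == g])
            for g in groups.unique()
        })

    # e.g. per_group_accuracy(model, X_test, y_test, X_test["group"])
    # A model can look fine overall and still do far worse on one subgroup.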




Right, that's some kind of human-interest-journalism fudging, and it isn't true. But bias and surprising wrong answers in ML are obviously a real problem, and fixing the data is not always the right answer. You might not be able to tell what's wrong with the data, or where you could get any more of it, and you might be reusing a model for a new problem without the capability to retrain it.


We should only use models where they work well. Just as in architecture, we should only build what will be safe to use.



