Hacker News

Unless the training set was explicitly biased in a specific way, this is basically saying that "the world is biased"

Models can be biased, but that doesn't seem like a reason to get the answer wrong. Humans have biases too, yet we don't get simple questions like these wrong.
