But this makes sense, since humans are biased towards, e.g., picking the first option from a list. If an LLM was trained on this data, it makes sense for the model to inherit the biases of the humans who produced the training data. A quick way to check this on any given model is sketched below.
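A minimal sketch of how one might measure that kind of position bias: present the same options in shuffled order over many trials and count how often each presentation position gets picked. The `choose` callable here is a placeholder you'd swap for a real model query; `biased_chooser` is just a dummy that mimics a first-option preference so the harness has something to run against.

```python
import random
from collections import Counter

def measure_position_bias(choose, options, trials=1000, seed=0):
    """Shuffle the same options across many trials and count how often
    each *presentation position* is picked. An unbiased chooser should
    pick every position about equally often."""
    rng = random.Random(seed)
    position_picks = Counter()
    for _ in range(trials):
        order = options[:]
        rng.shuffle(order)
        picked = choose(order)  # index into the shuffled list
        position_picks[picked] += 1
    return {pos: count / trials for pos, count in sorted(position_picks.items())}

# Dummy stand-in for a real LLM call (hypothetical): picks the first
# option half the time, otherwise uniformly at random.
def biased_chooser(order):
    return 0 if random.random() < 0.5 else random.randrange(len(order))

print(measure_position_bias(biased_chooser, ["A", "B", "C", "D"]))
# Roughly {0: 0.62, 1: 0.13, 2: 0.12, 3: 0.13} -- position 0 is over-picked.
```

If the rates stay skewed toward position 0 no matter how the options are shuffled, the bias is positional rather than about the options themselves.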

