
> But the internet seems a very biased sample.

It also seems to me (acknowledging my lack of expertise) that LLMs trained on online text are likely to weight what is frequent over what is "true". Or perhaps I should say: repetition should not be considered evidence of truth, yet training on raw text effectively treats it that way. I have no idea how to steer LLMs or other ML models toward truth -- humans have a hard time agreeing on what truth is, and the ML researchers providing guided reinforcement learning have no special ability to discern it either.
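
To make that concrete, here is a toy sketch (not how any production LLM is actually trained; the corpus and statements are invented for illustration): a model fit by maximum likelihood simply matches the relative frequencies in its training data, so a claim repeated nine times out of ten ends up with 0.9 probability regardless of whether it is true.

    from collections import Counter

    # Toy corpus: a false claim repeated often, a true claim stated once.
    corpus = (
        ["the great wall is visible from space"] * 9
        + ["the great wall is not visible from space"] * 1
    )

    # Maximum-likelihood "model": the probability of a statement is just its
    # relative frequency in the training data. Real LLMs use neural networks
    # and a cross-entropy loss, but that objective still rewards matching the
    # empirical distribution of the text, not its truthfulness.
    counts = Counter(corpus)
    total = sum(counts.values())
    for statement, n in counts.items():
        print(f"P({statement!r}) = {n / total:.1f}")

    # Prints 0.9 for the repeated (false) statement and 0.1 for the true one:
    # repetition, not truth, determines the learned probability.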



