For what it’s worth, the article seems to be about the difficulty of modeling human patterns of ambiguity in expression, not an LLM’s ability to understand or interpret or use ambiguity.

It’s an important distinction in my view. We aren’t talking about whether the model “knows” or “understands” the difference between different uses of an ambiguous term. We are talking about how consistently the model predicts intelligible word fragments to follow a prompt when that prompt contains word fragments corresponding to language humans often use ambiguously.

In other words: do we, the humans, understand our own ambiguous expression well enough to model it accurately, and then interpret the model’s output according to that same understanding?

The paper seems to conclude: not quite yet.
