
You mostly see people projecting perceived error onto LLMs?

I don't think I've seen a single recent article about an AI getting things wrong where there was any nuance about whether it was actually wrong.

I don't think we're anywhere close to "nuanced mistakes are the main problem" yet.



I mostly see people ignoring successes and playing up every error.


But the errors are fundamental, and as a result the successes are actually subjective.

That is, it appears to get things right remarkably often, but the conclusions people draw about why it gets things right are undermined by the nature of the errors.

Like "it must have a world model," "it must understand the meaning of...", etc. The nature of the errors they're downplaying fundamentally undermines the certainty of those projections.



