
> Which works perfectly

... on conformant inputs, when it has no bugs.



On non-conformant inputs, a parser will barf and yell at you, which is exactly what you want.
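To make the contrast concrete, here is a minimal sketch in Python using the standard library's json parser (the input strings are made up for illustration):

    import json

    well_formed = '{"id": 1, "name": "widget"}'
    malformed = '{"id": 1, "name": "widget"'  # missing closing brace

    # Conformant input: the parser returns a faithful structure.
    print(json.loads(well_formed))  # {'id': 1, 'name': 'widget'}

    # Non-conformant input: it raises immediately and says where it failed.
    try:
        json.loads(malformed)
    except json.JSONDecodeError as e:
        print(f"rejected: {e.msg} at line {e.lineno}, column {e.colno}")

Run the same input through it a million times and you get the same answer, or the same exception, every time.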

On non-conformant inputs, there's absolutely no telling what an LLM will do, which is precisely the problem. It might barf, or it might blissfully continue, and even if the input was right, you couldn't remotely trust it to regurgitate the input verbatim.

As for bugs, it is at least theoretically possible to write a parser with no bugs, whereas an LLM is fundamentally probabilistic.



