Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

How would you say this compares to human error? Let's say instead of the LLM there's a human that can be fooled into running an unsafe query and returning data. Is there anything fundamentally different there, that makes it less of a problem?


You can train the human not to fall for this, and discipline, demote or even fire them if they make that mistake.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: