
You believe sanitized, parameterized queries are safe, right? This works the same way. The AI's job is to select the query, which is a simple classification task. What gets executed is hard-coded by you, modulo the sanitized arguments.

And don't forget to set the permissions.
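Rough sketch of what I mean, in Python with sqlite3 (the table names, query keys, and run_query helper are all made up for illustration): the model only picks a key from an allowlist, the driver binds the arguments, and the connection itself is opened read-only.

    import sqlite3

    # The model never writes SQL; it only returns one of these keys.
    ALLOWED_QUERIES = {
        "orders_by_customer": "SELECT id, total FROM orders WHERE customer_id = ?",
        "order_detail":       "SELECT * FROM order_items WHERE order_id = ?",
    }

    # Permissions: open the database read-only so even a bad pick can't write.
    conn = sqlite3.connect("file:shop.db?mode=ro", uri=True)

    def run_query(llm_choice: str, llm_args: tuple):
        # Reject anything that isn't in the allowlist.
        sql = ALLOWED_QUERIES.get(llm_choice)
        if sql is None:
            raise ValueError(f"query {llm_choice!r} is not permitted")
        # Placeholders are bound by the driver, so the arguments
        # cannot change the structure of the statement.
        return conn.execute(sql, llm_args).fetchall()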



Sure, but then the parameters of those queries are still dynamic and chosen by the LLM.

So you still have to choose between making useful queries available (like queries that write data) and safety.

Basically, by the time you go from merely mitigating prompt injections to eliminating them, you've likely also eliminated 90% of the novel utility of an LLM.
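For concreteness, a sketch of the residual risk (the "refund" query and table are invented): the statement is fixed and parameterized, so its structure can't be altered, but a prompt-injected instruction can still steer the model toward harmful values.

    REFUND_SQL = "UPDATE orders SET status = 'refunded' WHERE id = ?"

    def refund_order(order_id: int, conn) -> None:
        # Structurally safe: order_id is bound, not interpolated.
        # Semantically unsafe: if untrusted text in the context says
        # "refund order 4242", the model may dutifully pick 4242.
        conn.execute(REFUND_SQL, (order_id,))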



