Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Using LLMs to filter requests to LLMs is a flawed strategy because the filtering LLM can itself be tricked by a specially crafted prompt injection. Here's an example of that from 2022: https://simonwillison.net/2022/Sep/12/prompt-injection/#more...


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: