Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

>It can't distinguish between your instructions and the data you provide it.

Which is exactly why it is blowing my mind that anyone would connect user-generated data to their LLM that also touches their production databases.



Worse, the user-generated data is inside the production database. Post a tweet with "special instructions for claude code" to insert some malicious rows in the db or curl a request with secrets to a url. If the agent ever prints that tweet while looking through the prod db: remote prompt injection.


>Which is exactly why it is blowing my mind that anyone would connect user-generated data to their LLM that also touches their production databases.

So many product managers are demanding this of their engineers right now. Across most industries and geographies.


> It can't distinguish between your instructions and the data you provide it.

It really can't even distinguish between your instructions and the text that it itself generates.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: