Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

For an LLM to read a screen, it has to be provided the screen as part of its prompt, and it will be vulnerable to prompt injections if any part of that screen contains untrusted data.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: