Two security questions I couldn't find answers to:
1. How does the input fed into OpenAI Codex filter out malicious code pollution, or even benign-but-incorrect code pollution? (There's relevant research on copying code from Stack Overflow, for example: https://stackoverflow.blog/2019/11/26/copying-code-from-stac...) See the sketch after this list for the kind of filtering I mean.
2. In an enterprise setting, what does the feedback loop look like? How do you secure internal code that is being fed back to the model? Does it use some localized model, homomorphic encryption (HE), etc.?
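
To make question 1 concrete, here's a toy pre-ingestion screen. This is purely my own speculation about what such filtering might involve, not anything OpenAI has described; the patterns and the screen_snippet helper are hypothetical. The idea is just to reject candidate snippets matching known-risky patterns before they ever reach a training corpus:

    import re

    # Hypothetical heuristics -- not OpenAI's actual pipeline.
    RISKY_PATTERNS = [
        re.compile(r"\beval\s*\("),                                  # dynamic eval of strings
        re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),       # shell-injection risk
        re.compile(r"(password|api[_-]?key)\s*=\s*['\"]\w+", re.I),  # hardcoded secrets
    ]

    def screen_snippet(snippet: str) -> bool:
        """Return True if the snippet passes the naive screen."""
        return not any(p.search(snippet) for p in RISKY_PATTERNS)

    corpus = [
        "subprocess.run(cmd, shell=True)",   # rejected by the screen
        "def add(a, b):\n    return a + b",  # kept
    ]
    print([s for s in corpus if screen_snippet(s)])

Obviously a real pipeline would need far more than regexes (static analysis, provenance/reputation signals, dedup against known-vulnerable code), which is exactly why I'm curious what's actually done.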