
This looks awesome! And I'd really like to try it out.

Two security questions I couldn't find answers to:

1. How is the input fed into OpenAI Codex filtered for malicious code pollution, or even benign-but-incorrect code? (There's relevant research on Stack Overflow, for example: https://stackoverflow.blog/2019/11/26/copying-code-from-stac...)

2. In an enterprise setting, what does the feedback loop look like? How is internal code that gets fed back to the model secured? Does it use a localized model, homomorphic encryption (HE), etc.?



#2 is the big one for me. I'm hesitant to install this on a work machine where our code could be sent elsewhere.





