Before anyone does this on a work system, be aware that you're definitely feeding data to OpenAI, which is potentially even worse (in your employer's mind) than providing remote arbitrary code execution to OpenAI.
(OpenAI might not secure that data well enough, OpenAI might use it for its own purposes, leaking it might violate contracts or regulations to which your employer is subject, etc.)
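If you're going to send anything anyway, at least scrub the obvious stuff before it leaves the machine. A minimal sketch (assuming the 2023-era openai Python client; REDACTION_PATTERNS and notes.txt are hypothetical placeholders for illustration):

    import re
    import openai  # 2023-era client; reads OPENAI_API_KEY from the environment

    # Hypothetical patterns for things that should never leave the machine.
    REDACTION_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key IDs
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
    ]

    def scrub(text: str) -> str:
        """Replace anything matching a redaction pattern with a placeholder."""
        for pattern in REDACTION_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    prompt = scrub(open("notes.txt").read())
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Regex scrubbing is best-effort, of course: it catches recognizable token formats, not proprietary prose.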
And hides a copy of itself on the machines, spreading like a virus. :-O
Current model sizes make that difficult, but even if it just leaves backdoor access for itself, things could get scary in the future.
Totally agree, but these tools do provide real productivity boons! (Full disclosure: I founded Credal.ai for just this reason; our mission is to help you get the productivity boosts of AI without trading off your infosecurity.)
One thing I'm curious about: what do you think of the recent OpenAI announcement about not training models on data submitted via the API?
I still wouldn't trust them with sensitive info. I saw a post on Reddit that the official page was leaking users' question histories (and there are Reddit posts this morning about histories being wiped, perhaps to deal with this issue?) https://www.reddit.com/r/ChatGPT/comments/11l2hox/chatgpt_ju...
What about backups? Do they only keep backups for 30 days? Do they not back up this data at all? And is the legal concept of data retention even the same as the legal concept of data storage?