I think there's a widely held misconception that anything you paste into GPT-4 will be used as raw training data by the model.
Some people even seem to believe that it's learning continuously, so something you paste in could show up in an answer for another user a few minutes later.
My mental model of how this works is somewhat different:
- It takes months to train a model on raw data, and OpenAI train new ones (that get released to the public) quite infrequently.
- OpenAI DO NOT WANT your private data in their training data. They put a great deal of work into stripping out PII from the training data that they do use already (this is described in their papers). They're not going to just paste in anything that anyone typed into that box.
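To make the PII-stripping idea concrete, here is a minimal, hypothetical sketch of what one pass of redaction could look like. This is illustrative only — real scrubbing pipelines combine many detectors (NER models, checksums, allowlists), and these two regexes are my own invention, not OpenAI's actual method:

```python
import re

# Hypothetical redaction patterns -- illustrative only, not OpenAI's pipeline.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched span with a category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Mail me at jane.doe@example.com, SSN 123-45-6789."))
# Mail me at [EMAIL], SSN [SSN].
```

Even a toy pass like this shows why they would rather filter the corpus up front than gamble on raw user input.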
Here's the problem though: they DO use ChatGPT interactions to "improve" their services. I don't think that means piping the data directly into training, but they clearly log everything and use those interactions as part of subsequent rounds for things like fine-tuning and RLHF.
Also, a few weeks ago they had that embarrassing bug where some users could see the titles of other users' conversations.
So it's not irrational to worry about pasting data into GPT-4 - it gets logged, and it could leak by accident.
But I'm confident that data passed to ChatGPT isn't being piped in as raw training data for subsequent versions of their live models.
(I hope I'm right about this though - I thought about blogging it, but OpenAI's transparency isn't good enough that I'd feel comfortable staking my reputation on this)
I'm absolutely horrified by people's willingness to submit private information (personal or corporate) even if it's not used for training. Data breaches happen all the time (targeted or accidental), and OpenAI is becoming a juicier target by the day.
You're right that OpenAI doesn't want the information. But that also means OpenAI won't have security policies and processes geared toward anonymization, or toward handling financial and health data, because those aren't design goals. If I were an attacker, I'd go for the raw data rather than try to glean information from the model (in the hypothetical where user input were used for training).
Healthcare companies usually require BAAs from IT vendors before those vendors start getting paid. That doesn't mean the vendors are claiming their systems are HIPAA compliant.
Someone might ask it:
"How do I figure out if this person killed someone?"
and it responds:
"I can't be certain if they killed them but last week they asked me where they should hide the body."
But seriously, I think the best argument for this is that the EU (or individual European nations) would not hesitate to go after a US company for collecting user data in violation of their data privacy laws. Even in the US, certain professionals are required to maintain the confidentiality of certain records or face rather extreme penalties. OpenAI also doesn't have FAANG capital to grease Washington with yet, and we know how kleptocrats love to leverage justice against newly emergent companies with valuable IP.
So if they say they don't, they had better not be, or it would likely be the end of them.
> last week they asked me where they should hide the body.
ChatGPT is a static model and has zero memory between requests. It can't even "remember" anything token-to-token as it generates its output! It starts its processing over from scratch for each token.
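The statelessness point can be sketched with a toy autoregressive loop. The `next_token` function below is a stand-in for the model (invented for illustration, not a real LLM): the only "memory" is the context string passed in, every step re-reads that entire context, and nothing survives once the call returns.

```python
# Toy, stateless autoregressive generation. `next_token` is a stand-in
# model that picks a token deterministically from the context length.
def next_token(context: str) -> str:
    vocab = ["the", "cat", "sat", "."]
    return vocab[len(context.split()) % len(vocab)]

def generate(prompt: str, n_tokens: int) -> str:
    context = prompt
    for _ in range(n_tokens):
        # Each step re-reads the ENTIRE context from scratch -- there is
        # no hidden state carried over between steps or between calls.
        context = context + " " + next_token(context)
    return context

a = generate("hello world", 3)
b = generate("hello world", 3)
# Identical inputs give identical outputs: no conversation state leaks
# from one call into the next.
```

The "hide the body" scenario would require state persisting across separate conversations, which a static model like this simply doesn't have; any cross-conversation leakage would have to come from logs or training data, not the model's "memory".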
I think it's more about being against the principle of piping potentially sensitive data to any third party.
True, OpenAI doesn't have any real motivation to randomly pluck your data and decide to do something horrible to you with it... but they could. More importantly, circumstances can and will change as time goes on. If your logs change hands as part of a buyout or cyberattack, you'll have no recourse.
> OpenAI doesn’t have any real motivation to randomly pluck your data and decide to do something horrible to you with it
They do have a motivation to use it for training, which could result in it being externally exposed to third parties, who might in turn have the motivation, when encountering it, to do something horrible to you with it.
Yes re treating interactions as RLHF data. I could imagine them developing a flow to automatically catalog interactions as successful or unsuccessful, then cluster those by domain + interaction flow. If someone has a successful interaction in a cluster that is normally unsuccessful, treat that as a 'wild-type' prompt engineering innovation that needs to be domesticated into the model.
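A minimal sketch of that cataloguing flow, under my own assumptions (the field names, the success labels, and the 50% threshold are all invented for illustration):

```python
from collections import defaultdict

# Hypothetical logged interactions, already labelled successful/unsuccessful.
interactions = [
    {"domain": "sql", "flow": "debug",  "success": False},
    {"domain": "sql", "flow": "debug",  "success": False},
    {"domain": "sql", "flow": "debug",  "success": True},   # the outlier
    {"domain": "css", "flow": "layout", "success": True},
]

# Cluster outcomes by (domain, interaction flow).
clusters = defaultdict(list)
for item in interactions:
    clusters[(item["domain"], item["flow"])].append(item["success"])

# A success inside a mostly-failing cluster may signal a prompt-engineering
# trick worth folding back into the model.
candidates = [
    key for key, outcomes in clusters.items()
    if any(outcomes) and sum(outcomes) / len(outcomes) < 0.5
]
print(candidates)  # [('sql', 'debug')]
```

The interesting part is the mismatch signal: a lone success in a cluster that usually fails points at the prompt, not the model, as the thing that changed.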
I think you're right that blindly training on chats would bring back the olden days of google bombing ('santorum')
And also that any company with 'improve' in their TOS isn't committing to perfect privacy
> OpenAI DO NOT WANT your private data in their training data
But they do want it. I can see many old chat logs.
Data is a liability. Does "clear conversations" in chat.openai.com actually remove them, or just mark them as "deleted" while they remain in a database? I just did a data export, then cleared my conversations, then did another data export. The second export was empty, which seems suspiciously fast to me.
I'm genuinely trying to understand (based on this and another comment above): wouldn't storing data for pre-training vs fine-tuning carry the same risks?
If you mean the risk that OpenAI will have their own security hole that leaks that stored data then yes.
If you mean the risk that someone will ask a question about your company and ChatGPT will answer with some corporate secrets then no.
This all depends very much on what they are using the ChatGPT data for. My theory is that they treat it very carefully to avoid "facts" from it being absorbed into the model - so even "fine tuning" may be inaccurate terminology here.
I really, really wish they would be more transparent about how they use this data.
> But I'm confident that data passed to ChatGPT isn't being piped in as raw training data for subsequent versions of their live models.
Right now, sure, but they are almost certainly saving that data to send you targeted ads down the line. Maybe not this company... maybe when they get into financial hardship and sell off to someone with dubious ethics. Maybe not ads, but something like that.