> It shouldn't do that, and we are taking steps to avoid reciting training data in the output
This just gives me a flashback to copying homework in school, “make sure you change some of the words around so it’s not obvious”
I’m sure you’re right re: the jurisprudence, but it never sat right with me that AI engineers get to produce these big, impressive models while the people who created the training data will never be compensated, let alone asked. I posted my face on Flickr; how was I supposed to know I was consenting to benefit someone’s killer-robot facial recognition?
The whole point of that case begins with the admission "yes of course Google copied." They copied the API. The argument was that copying an API to enable interoperability was fair use. It went to the Supreme Court because no law explicitly said that was fair use and no previous case had settled the point definitively. And the reason Google could be confident they copied only the API is because they made sure the humans who did it understood both the difference and the importance of the difference between API and implementation. I don't think there is a credible argument that any AI existing today can make such a distinction.