Everyone knows OpenAI is guilty here, but the plaintiffs still need to prove it; that's what the logs are for. The sheer volume of online astroturfing on this subject shows how well OpenAI has automated that part of its defense (astroturfing, manufactured public outcry).
However, I find it unlikely that OpenAI hasn't already built filters to prevent its output from looking like regurgitated NYTimes content.