What is the NYT's stance here? Is it pure spite? I guess their lawyers told them this is the winning move, and perhaps it is. But it just seems so blatantly wrong.
If you look at Reddit's r/ChatGPT, you'll quickly notice that the median use of ChatGPT is for therapy.
Is the NYT really ok with combing through people's therapy logs?
I find it interesting that you are blaming the NYT for this and not OpenAI for keeping these logs in the first place. If OpenAI didn't keep logs, there would be nothing to search, and a more harmful actor couldn't accomplish something far more nefarious. If the argument is that the logs may contain confidential information and therefore shouldn't be accessed, then by the same logic the logs shouldn't be kept at all.
What I mean is there are two meanings to "expectation of privacy": the Bayesian prior, and the legal stance. I have an expectation of privacy in my home but I still close the shades.