Yeah, it’s disheartening that people often think my writing (most of it predates GPT-3) is LLM output, and some of my favourite writers also fall under this wet blanket. LLMs imitate the most common writing styles, so now if you write in a common way you’re “an LLM”.
I’ve also had my writing misidentified as LLM-produced on multiple occasions in the last month. Personally, I don’t really care whether a piece of writing is generated by AI if it contains solid arguments and reasoning, but when you haven’t used generative AI to produce something, it’s a weird claim to respond to.
Before GPT-3 existed, I often received positive feedback about my writing; now it’s quite the opposite.
I’m not sure whether these accusations of AI generation come from genuine belief (and overconfidence) or some bizarre ploy for standing/internet points. Usually these claims of detecting AI generation get bolstered by others who also claim to be more observant than the average person. You can know they’re wrong when you wrote the piece yourself, but it’s not really provable.
I've read a _lot_ of deep learning papers, and this is extremely atypical. I agree with you that if there were any sort of serious implications then it'd be important to establish proof, but in the case of griping on a forum I think the standard of evidence is much lower.
If you have actual evidence, e.g. that the token log-probabilities are statistically consistent with LLM output, that would be appreciated - otherwise it's just arguing over style.
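For what it's worth, here's roughly what that kind of evidence looks like in practice: a minimal perplexity check, a sketch assuming the HuggingFace `transformers` library with GPT-2 as the scoring model (both placeholder choices, not anyone's actual detector). Low perplexity just means the model finds the text predictable.

```python
# A minimal sketch of perplexity-based "LLM-likeness" scoring.
# Assumptions: HuggingFace `transformers` is installed, and GPT-2 is an
# acceptable stand-in for whatever model the accuser has in mind.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical choice; any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    # Score the text against itself: passing the input ids as labels makes
    # the model return the average negative log-likelihood per token, and
    # exp(loss) is the perplexity.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

Even a statistically significant result from something like this only shows the text is easy for a model to predict; it can't separate "written by an LLM" from "written in the common style LLMs imitate", which is exactly the problem upthread.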