It may be because I have a writer/English-major personality, and so am very sensitive to the mood and tone of language, but I've never had trouble distinguishing LLM output from human writing.
I'm not suggesting anything so arrogant as that I cannot be fooled by someone intentionally deploying an LLM with that aim; if they're trained on human input, they can surely mimic human output. I just mean that the formulations that come out of the mainstream public LLM providers' models, guided as they are by their pretraining and system prompts, are pretty unmistakably robotic, at least in every incarnation I've seen. Of course, I don't know what I don't know: I can't rule out having interacted with LLMs without realising it.
In the technical communities in which I move, there are quite a few forums and mailing lists where low-skilled newbies and non-native English speakers frequently pass off LLM slop as their own writing. Some do it blatantly; others must believe they're being quite sly and subtle; but even in the latter case, it's absolutely unmistakable to me.