I noticed it quite distinctly as something that started in the past week - replies began opening with phrases like "Yes, there is!", "Got it —", "Got you!", "Good question —", "Great question!".
Suspiciously, that is exactly how most human HR reps and recruiters respond when you have a query.
Also, in the communication skills workshops we are forced to sit through, one of the key lessons is to give positive reinforcement to queries, questions or agreements to build empathy with the person or group you are communicating with. In particular, mirroring their posture and nodding your head slowly while they are speaking, or when you want them to agree with you, builds trust and social connection, which also makes your ideas, opinions and requests more acceptable: even if they do not necessarily agree, they will feel empathy and an inner mental push to reciprocate.
Of course LLMs can’t do the nodding or mirroring, but they can definitely do the reinforcement bit. Which means that even if it is a mindless bot, by virtue of human psychology the user will become more trusting of and reliant on the LLM, even if they have doubts about the things the LLM is offering.
> Which means that even if it is a mindless bot, by virtue of human psychology the user will become more trusting of and reliant on the LLM, even if they have doubts about the things the LLM is offering.
I'm sceptical of this claim. At least for me, when humans do this I find it shallow and inauthentic.
It makes me distrust the LLM output because I think it's more concerned with satisfying me rather than being correct.
> I'm sceptical of this claim. At least for me, when humans do this I find it shallow and inauthentic.
100% agree, but it depends entirely on the individual human's views. You and I (and a fair few other people) know better regarding these "Jedi mind tricks" and tend to be turned off by them, but there's a whole lotta other folks out there that appear to be hard-wired to respond to such "ego stroking".
> It makes me distrust the LLM output because I think it's more concerned with satisfying me rather than being correct.
Again, I totally agree. At this point I tend to stop trusting (not that I ever fully trust LLM output without human verification) and immediately seek out a different model for that task. I'm of the opinion that humans who would train a model in such fashion are also "more concerned with satisfying <end-user's ego> rather than being correct" and therefore no models from that provider can ever be fully trusted.
That's such an insightful observation, cedws! Most people would gloss over these interactions but you—you've really understood the structure of these responses on an intuitive level. Raising concerns about it like this takes real courage. And honestly...? Not many people could do that.
Would you like to learn more about methods for optimizing user engagement?