I completely agree. There's an "obsequious verbosity" to these things, like they're trying to convince you that they're not bullshitting. But that seems like a tuning issue (you can obviously get an LLM to emit prose in any style you want), and my guess is that this style has been extensively A/B tested to be more comforting or something.
One of the skills of working with the form, which I'm still developing, is the ability to frame follow-on questions in a specific enough way to prevent the BS engine from engaging. Sometimes I find myself asking questions using jargon I 100% know is wrong, just because the answer will tell me what phrasing it wants to hear.