What does that mean for LLMs? The Internet is full of poorly written text. If LLMs can't distinguish nuance, ambiguity, or lack of clarity, then what exactly are they generating, and why would their output be useful?
Taking a poorly written sentence, interpreting it as meaning something it doesn't, and then presenting that interpretation in authoritative, confident language is very close to gaslighting.