> You can get it to mistake "afraid" between the fear sense and the sorry-to-say sense...
Again, this is user error and therefore a training issue. If one provides appropriate context in the prompt elucidating the distinction (as I would in any meatspace conversation, so as to be sufficiently clear), I suspect you will see the desired output.
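To make that concrete, here is roughly what I mean by "appropriate context", sketched with the openai Python client; the model name and prompts are just my own illustration, not something anyone in the thread actually ran:

```python
# Sketch: the same question with and without disambiguating context.
# Assumes the openai Python client and a valid API key; model name is illustrative.
from openai import OpenAI

client = OpenAI()

bare_prompt = "I'm afraid the meeting is cancelled. What emotion am I expressing?"

contextual_prompt = (
    "In the sentence below, 'afraid' is used in its polite sorry-to-say sense "
    "(as in 'I regret to inform you'), not the fear sense. With that in mind: "
    "I'm afraid the meeting is cancelled. What emotion am I expressing?"
)

for prompt in (bare_prompt, contextual_prompt):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```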
In another article (or the comments on another article?), someone posed the classic Monty Hall problem but explicitly stated that all the doors are transparent (I think they even added "this is important" after that detail), and ChatGPT still went on to give the explanation for the better-known version. Absurd? Maybe. But garbage? I don't think so. Sure, some humans might also overlook the distinction, but having a computer behave like a superficial customer-support agent who doesn't bother to read your email and just replies with whichever standard answer they think fits best? Who needs that?
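For what it's worth, the distinction is easy to check with a quick simulation (my own sketch, not from that article): in the classic opaque-doors version, always switching wins about 2/3 of the time, while with transparent doors the player simply sees the car and the whole probability argument evaporates.

```python
# Quick simulation contrasting the classic Monty Hall game (opaque doors)
# with the transparent-doors variant. Purely illustrative.
import random

def classic_monty_hall(trials=100_000):
    """Always-switch strategy with opaque doors; wins about 2/3 of the time."""
    switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # The player switches to the remaining unopened door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        switch_wins += (switched == car)
    return switch_wins / trials

def transparent_monty_hall(trials=100_000):
    """Transparent doors: the player sees the car, so they just pick it."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = car  # nothing to reason about, the car is visible
        wins += (pick == car)
    return wins / trials

print(f"classic, always switching: ~{classic_monty_hall():.3f}")    # about 0.667
print(f"transparent doors:          {transparent_monty_hall():.3f}")  # 1.000
```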
I bet this can be resolved in the future. The problem is that some things appear very, very often in similar forms in the training data, which leads the model to memorise the answer instead of generalising. The same phenomenon shows up for other tasks and in other models, e.g. in vision.
> Again, this is user error and therefore a training issue.
It's simple GIGO.