elzbardico is pointing out that the author has the model generate a confidence value as part of its response text, rather than that value being an actual measure of confidence in the output.
This trick is being used by many apps (including GitHub Copilot reviews). The way I see it, if the agent has an eager-to-please problem, you give it a way out.
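A minimal sketch of the idea, assuming a hypothetical `call_model` helper that sends a prompt to whatever LLM you use and returns its text reply: the prompt asks for a self-reported confidence alongside each finding, so a low value becomes an acceptable answer instead of forced agreement.

```python
import json

def review_with_confidence(call_model, code_diff: str) -> list:
    """Ask the model to attach a self-reported confidence to each finding.

    `call_model` is a placeholder for an LLM call: prompt string in, text out.
    """
    prompt = (
        "Review the following diff. Reply with a JSON list of objects, each "
        'having "comment" and "confidence" (0.0-1.0). If you are unsure a '
        "finding is real, say so with a low confidence instead of inventing "
        "certainty.\n\n" + code_diff
    )
    reply = call_model(prompt)
    findings = json.loads(reply)
    # Filter out low-confidence findings -- the "way out" for an
    # eager-to-please model, rather than a true probability estimate.
    return [f for f in findings if f["confidence"] >= 0.5]
```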