Dunno about Gemini, but I experienced this with o1: it created a non-recursive algo for a problem that needed recursion.
When I asked it about that, it doubled down on being right. When I pointed out the flaw with a specific example, it was basically like 'If you wanted it to work with recursive cases, you should've said so, dumbass'.
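To give a flavor of the kind of flaw I mean (this is a hypothetical reconstruction, not the actual code o1 produced), think of a flat, non-recursive function that handles one level of structure fine but breaks the moment the input nests:

```python
# Hypothetical example: summing values in a possibly nested dict.

def sum_values_flat(data):
    # Non-recursive version: only handles one level.
    # Nested dicts make this raise a TypeError.
    return sum(data.values())

def sum_values_recursive(data):
    # Handles arbitrary nesting by recursing into child dicts.
    total = 0
    for value in data.values():
        if isinstance(value, dict):
            total += sum_values_recursive(value)
        else:
            total += value
    return total

nested = {"a": 1, "b": {"c": 2, "d": {"e": 3}}}
# sum_values_flat(nested)  -> TypeError (the 'specific example' moment)
print(sum_values_recursive(nested))  # 6
```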
So my conclusion is that these new LLMs aren't more sure they're right; they're just right more of the time and trained with a more assertive personality. (Also, step on me, LLM daddy.)
But in truth, not necessarily in practical things like coding but in more ethereal things like analysis, it is very convincing. More so than a human at explaining why its answer is the case, even when it's wrong. If you're looking for an excuse better than 'my dog ate it', ask a SOTA LLM.