Interesting! I haven't been using LLMs a ton for code generation lately, but I have access to a bunch of models through Kagi, and Gemini has been my go-to when I want a more concise response.
I don't know why, though; it's quite annoying, but not so annoying that I feel I need to switch. Since I'm just following a uni course where the code will never be read again, except maybe by colleagues in my group, I leave in the safety slop and put the burden of skipping 70% of the code on the shoulders of the colleagues who will read it.
Then they put my code into ChatGPT, or whatever they use, and ask it to adapt it to their code.
After a while we (almost) all realized that this was just producing a huge clusterfuck.
BTW, I think it would have been much better for them to start from scratch with their own implementations, given that we're analyzing different datasets. It might not even make sense to try to convert code written for one dataset structure to another. One colleague couldn't get a heatmap out of my code and a simple CSV, for God knows what reason. And I think just asking an LLM to plot from a CSV from scratch would be quite easy.