From what I have seen, people using AI somehow get the mindset that the AI-generated result is "godlike" and the "ultimate truth". It's really, really scary, and I'm not that hopeful about what we will see in the next decade.
Once I told a coworker that a piece of his code looked rather funky (without doing a deeper code review), and he told me it's "proven correct by AI". I was stunned, and asked him if he knew how LLMs generate their responses. He genuinely believed it was in fact "artificial intelligence" and some sort of "all-knowing entity".
It just drives me nuts when I see people say things like "yeah I asked ChatGPT about this extremely famous open problem, wish me luck!" Like what do you expect to happen exactly with an engine that can't even consistently keep track of what you wrote ten thousand tokens ago?