And just like optimizing compilers, LLMs emit code that is difficult to verify and that no one really understands, so when the shit hits the fan you have no idea what's going on.
Is it though? In my experience, most code an LLM emits is easier to understand than equivalent code written by humans, helped by the overt amount of comments added at every single step.
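For example, something like this sketch is typical of the comment-at-every-step style I mean (the function and names are made up for illustration):

    def average_word_length(text: str) -> float:
        # Split the input text into individual words
        words = text.split()
        # If there are no words, return 0.0 to avoid division by zero
        if not words:
            return 0.0
        # Sum the lengths of all the words
        total_length = sum(len(word) for word in words)
        # Divide the total length by the word count to get the average
        return total_length / len(words)

Every step is narrated, whether or not it needs to be, so you can follow the intent line by line.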
That's not to say the output is correct: there are usually bugs and unnecessary cruft when the generated logic isn't trivial, but reading it isn't the biggest hurdle.
I think you are referring to the situation where people just don't read the generated code at all... in that case it's not really the LLM's fault.