GP was pondering code re-use. My typical use involves giving the LLM an entire file and asking it to return the entire file with the requested changes implemented, so that it's forced to keep the full text in context and can't go off-track by focusing on small sections of code when related changes might be needed elsewhere in the file.
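For anyone curious, a rough sketch of that round-trip workflow looks something like the following. This is just an illustration, not a recommendation of any particular provider: the model name, prompt wording, file path, and the choice of the OpenAI Python client are all placeholders you'd swap for whatever you actually use.

    # Sketch of the "round-trip the whole file" workflow described above.
    # Assumes the OpenAI Python client; model name, prompt wording, and the
    # example path are placeholders.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rewrite_file(path: str, change_request: str, model: str = "gpt-4o") -> None:
        source = Path(path).read_text()
        prompt = (
            "Here is the complete file. Apply the requested change and return "
            "the ENTIRE updated file, with no omissions or placeholders.\n\n"
            f"Requested change: {change_request}\n\n"
            f"--- {path} ---\n{source}"
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        # Write the model's full-file answer back; review the diff before committing.
        Path(path).write_text(response.choices[0].message.content)

    rewrite_file("app/parser.py", "rename parse_row to parse_record everywhere")

The point of sending the whole file (and demanding the whole file back) is exactly what's described above: the model can't hand-wave with "rest of file unchanged" and is more likely to notice when a change in one function requires a matching change elsewhere.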
I think all of this is getting at the fact that an LLM won't spit out perfect code in response to a lazy prompt unless it's been heavily post-trained to "reinterpret" sloppy prompts as rigorously as carefully written ones. It's the same as with a human programmer: you can hand over a project description, wait for the deliverable, and accept it at face value, or you can join the programmer along their journey and verify the work meets the standards you want. And sometimes there is no other way to get a hard project done.
Conversely, sometimes you can give very detailed specifications and the LLM will just ignore part of them over and over. Hopefully the training experts can continue to improve that.