
There are certain pieces of text that appear right before some of the greatest pieces of code ever written. For example, we've all heard of NASA code requirements. If you get the LLM into the "mindset" of a top-tier professional developer before getting it to spit out code, the code quality will reflect that. If your prompt is sloppy and poorly defined, you'll get copy-pasted StackOverflow code, since that's how most SO questions look. If it's stupid but it works, it's not stupid.
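To make the contrast concrete, here is a hedged illustration of that "mindset" priming: the same request phrased as a sloppy one-liner versus a prompt that sets a professional framing first. Both strings are invented examples for illustration, not benchmarked prompts.

```python
# Two ways of asking for the same code. The claim above is that the
# second framing tends to elicit more careful output because it evokes
# the register of top-tier professional code rather than a quick SO answer.

sloppy = "write me code to parse dates pls"

primed = (
    "You are writing code for a long-lived production service where "
    "clarity, error handling, and tests matter.\n"
    "Task: implement a date-parsing helper that rejects ambiguous input "
    "and documents exactly which formats it accepts."
)
```

The only difference is the surrounding text; the underlying task is identical.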

The hard part is that a locally optimal prompting style for one LLM may or may not transfer to another, depending on its personality post-training.

And whatever style works best with all LLMs must be approaching some kind of optimum for using English to design and specify computer programs. We cannot have better programs without better program specifications.



Can you share some examples of these certain pieces of text and greatest pieces of code?


Well, if you want safety-critical code, you could have the LLM read this (Holzmann's "Power of 10" rules for developing safety-critical code) before asking it to write its own: https://ieeexplore.ieee.org/document/1642624

GP was asking about code re-use. My typical use involves giving an entire file to the LLM and asking it to return the entire file with the requested changes applied, so that it's forced to keep the full text in context and can't get off-track by focusing on small sections of code when related changes might be needed elsewhere in the file.
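The whole-file round-trip described above can be sketched as a prompt builder. `call_llm` and the file names here are hypothetical stand-ins; the point is the prompt structure (embed the full source, demand the complete file back), not any particular client library.

```python
# Minimal sketch of the whole-file prompting workflow: include the entire
# file so the model sees all of it, and ask for the entire file back so
# related changes in other parts of the file are not dropped.

def build_whole_file_prompt(path: str, source: str, request: str) -> str:
    """Embed the full file contents and the change request in one prompt."""
    return (
        f"Here is the full contents of {path}:\n\n"
        f"```\n{source}\n```\n\n"
        f"Apply this change: {request}\n\n"
        "Return the COMPLETE updated file, not a diff or excerpt, so that "
        "related changes elsewhere in the file are not missed."
    )

prompt = build_whole_file_prompt(
    "accounts.py",  # hypothetical file
    "def balance(user):\n    return db.query(user).total\n",
    "handle the case where the user is missing from the database",
)
# The resulting string would then be sent to whatever chat API you use,
# e.g. response = call_llm(prompt)  # call_llm is a placeholder
```

The design choice is the trade-off the comment describes: you pay more tokens per round-trip, but the model cannot silently ignore parts of the file it never saw.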

I think all of this is getting at the fact that an LLM won't spit out perfect code in response to a lazy prompt unless it's been heavily post-trained to "reinterpret" sloppy prompts as rigorously as precise ones. It's like working with a human programmer: you can hand over a project description, wait for the deliverable, and accept it at face value, or you can join the programmer along the way and verify the work meets the standards you want. And sometimes there is no other way to get a hard project done.

Conversely, sometimes you can give very detailed specifications and the LLM will just ignore part of them over and over. Hopefully the training experts can continue to improve that.


Horrible, terrifying advice. If you want safety critical code, DON'T USE AN LLM AT ALL. Use formal verification methods!

This is a solved problem!



