I quite often prompt with code in a different language, or pseudo-code describing roughly what I am trying to achieve, or a Python function signature without the function body.
Or I will paste in a bunch of code I have already written with a comment somewhere that says "TODO: retrieve the information from the GitHub API" and have the model finish it for me.
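A concrete (entirely hypothetical) example of that style of prompt: just a signature, a docstring, and the TODO, with no body at all:

```python
def fetch_repo_issues(owner: str, repo: str, state: str = "open") -> list[dict]:
    """Return the issues for owner/repo from the GitHub API."""
    # TODO: retrieve the information from the GitHub API
    ...
```

The model fills in the body, often inferring the endpoint and pagination from the names alone.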
> a Python function signature without the function body.
This, and for multiple functions that end up composing well together, as per their signatures. Maybe there's one public function I want to document well, so I write the docstring myself, and its body composes 3-4 other internal functions which I'd let the LLM implement.
The nice part is that even if the LLM fails, none of that work is lost, unlike some weird spec I'd feed an LLM that's too verbose for a human reader, or a throwaway series of prompts.
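A rough sketch of that shape, with all names hypothetical: one well-documented public function whose body already composes the internal stubs, and the stubs themselves left for the LLM:

```python
def summarize_user_activity(username: str, days: int = 30) -> str:
    """Return a one-paragraph summary of a user's recent activity.

    Fetches the user's events, groups them by repository, and renders
    a short human-readable report covering the last `days` days.
    """
    events = _fetch_recent_events(username, days)
    grouped = _group_by_repo(events)
    return _render_report(username, grouped)

# Internal helpers: the signatures pin down the design;
# the bodies are left for the LLM to implement.
def _fetch_recent_events(username: str, days: int) -> list[dict]:
    ...

def _group_by_repo(events: list[dict]) -> dict[str, list[dict]]:
    ...

def _render_report(username: str, grouped: dict[str, list[dict]]) -> str:
    ...
```

The signatures constrain the model much more tightly than prose would, and they remain readable to a human either way.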
I am pretty sure we will see programming languages custom-made for AI popping up soon. I cannot predict what they will look like, but they may be a mix of an extremely well documented language with lots of safeguards (like Kotlin or Java) and natural-language-like instructions.
No. Thanks for pointing this out. It took quite some time to understand this and express it to my management. I also use a very small set of libraries, with function names and parameters that are quite explicit. And I don't need to specify a style to work around hidden prompts or bad coding practices.
I think in code.
To me, having to translate that into natural language just so the LLM can translate it back into code makes very little sense.
Am I alone in this camp? What am I missing?