Just yesterday I was thinking to myself "I wonder how long it will be until we start generating prompts using code and then we're just writing code again"
There probably is a cool mix of both that is better than either one separately. I'm thinking something like
func = llm('function that sorts two input argument lists')
where llm calls OpenAI or a local LLM (with the result cached for later use). This way you don't lose the benefits of a coding interface (versus generating all the code with OpenAI, and the maintainability mess that can come with that), and you still get readability through the prompt, etc. (This project is sort of in that direction already.) It's basically like writing code against a framework that is 'filled out' automatically by a coworker.
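A rough sketch of what that could look like in Python; generate_code() is a stand-in for whatever backend you'd actually call (OpenAI, a local model), and all the names here are made up for illustration:

    import hashlib
    import os

    CACHE_DIR = ".llm_cache"

    def generate_code(prompt: str) -> str:
        # Placeholder for the actual model call (OpenAI, local LLM, ...).
        # For the sorting example, a real model might return something like:
        return (
            "def generated(a, b):\n"
            "    return sorted(a), sorted(b)\n"
        )

    def llm(prompt: str):
        """Turn a natural-language prompt into a callable, caching the generated source."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, hashlib.sha256(prompt.encode()).hexdigest() + ".py")
        if os.path.exists(path):
            source = open(path).read()      # reuse the cached version
        else:
            source = generate_code(prompt)
            with open(path, "w") as f:
                f.write(source)             # cache for later use and review
        namespace = {}
        exec(source, namespace)             # trust boundary: review cached code first
        return namespace["generated"]       # convention: generated function is named `generated`

    func = llm("function that sorts two input argument lists")
    print(func([3, 1, 2], ["b", "a"]))      # ([1, 2, 3], ['a', 'b'])

The nice part is that the cached file doubles as real source you can read, test, and pin, while the prompt stays in the code as documentation.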
Worth being creative with ideas like this at least.
> It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system.
Introducing non-deterministic inputs into programs is already wild.
I think once we get past "make the LLM generate #$%!#@ JSON" we will start seeing a lot more of this, since we will then be able to constrain the code-paths that are followed.
I could absolutely see LLM-powered operators being introduced at some point that we use just like an if statement.
Imagine if (input like LLM(greeting)) or while (LLM(input) like "interested").
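Such a "like" operator could be a thin wrapper around a yes/no prompt; ask_llm() below is a placeholder, not a real API, just to show the shape:

    def ask_llm(question: str) -> str:
        # Placeholder for an OpenAI / local-model call.
        return "yes"

    def like(text: str, concept: str) -> bool:
        """LLM-backed predicate: does `text` loosely match `concept`?"""
        answer = ask_llm(f"Answer strictly yes or no: is the following text {concept}?\n\n{text}")
        return answer.strip().lower().startswith("yes")

    user_input = "Hey there, how's it going?"
    if like(user_input, "a greeting"):      # i.e. if (input like LLM(greeting))
        print("greet back")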
Essentially you're distilling unstructured inputs into a canonical form to be used in control flow:
Switch(LLM(input)):
- case request
- case question
- case query
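Sketched in Python with the match statement; classify() stands in for a constrained LLM call (e.g. one forced to return exactly one label from a fixed set), so every downstream code path is known in advance:

    LABELS = ("request", "question", "query", "other")

    def classify(text: str) -> str:
        # Placeholder for a constrained LLM call that must return one of LABELS.
        label = "question"                  # pretend this came from the model
        return label if label in LABELS else "other"

    match classify("Could you tell me when the report is due?"):
        case "request":
            print("route it to the task queue")
        case "question":
            print("answer it directly")
        case "query":
            print("translate it to SQL and run it")
        case _:
            print("fall back to a human")

Constraining the model to a closed label set is what keeps the code paths enumerable, which is the whole point of the switch.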