> I don't understand why the dominant approach is an actual, realistic chat interface where you can only add a new response, or in best case create "threads".
I'm not 100% sure either; I think it might just be a first-iteration UX that is generally useful, but not specifically suited to use cases like coding.

To work around this, I generally keep my prompts as .md files on disk and treat them like templates, with variables like $SRC that get replaced with the actual code when I "compile" them. So I write a prompt, paste it into ChatGPT, notice something is wrong, edit my template on disk, then paste it into a new conversation, and iterate until it works. I ended up putting the CLI I use for this here, in case others wanna try the same approach: https://github.com/victorb/prompta