The debate is about whether the abstraction here is valuable enough to warrant a library, and about the fact that it predefines the prompt and API call flow, so you can't prompt-engineer or use something like CoT/ToT.
Except it's not neat or novel: this idea has been around and implemented for many months now, by many people, using many methods. Running a tool on the output and then feeding the result back to the LLM is also not novel, and is a widely used technique.
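The "run a tool on the output and feed it back" loop mentioned above can be sketched roughly like this. This is a hedged illustration, not TypeChat's actual API: `callModel`, `validatePerson`, and `repairLoop` are all hypothetical names, and the model call is stubbed out.

```typescript
// Sketch of the widely used "validate and repair" loop: run a checker on the
// model's output and feed failures back to the model as a follow-up prompt.

type ModelFn = (prompt: string) => string; // stand-in for a real LLM call

interface Person { name: string; age: number; }

// Returns a Person on success, or an error message string on failure.
function validatePerson(raw: string): Person | string {
  try {
    const obj = JSON.parse(raw);
    if (typeof obj.name === "string" && typeof obj.age === "number") {
      return obj as Person;
    }
    return "Output must be an object with string `name` and number `age`.";
  } catch (e) {
    return `Output was not valid JSON: ${(e as Error).message}`;
  }
}

function repairLoop(callModel: ModelFn, prompt: string, maxTries = 3): Person {
  let current = prompt;
  for (let i = 0; i < maxTries; i++) {
    const raw = callModel(current);
    const result = validatePerson(raw);
    if (typeof result !== "string") return result;
    // Feed the validator's complaint back to the model and retry.
    current = `${prompt}\nYour previous answer was rejected: ${result}\nTry again.`;
  }
  throw new Error("Model never produced valid output");
}

// Stub model that fails once, then succeeds, to exercise the retry path.
let calls = 0;
const stubModel: ModelFn = () =>
  calls++ === 0 ? "not json" : `{"name": "Ada", "age": 36}`;

const person = repairLoop(stubModel, "Return a person as JSON.");
console.log(person.name, person.age); // Ada 36
```

The point of the commenter stands: nothing in this loop is library-specific; any schema checker (JSON Schema, zod, a compiler) can play the validator role.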
> We'd love to know if TypeChat is something that's useful and interests you!
People can debate till the cows come home. But it's worth remembering that Hacker News is about stimulating intellectual curiosity.
There's no reason for this to have a fixed flow, either. It has a hint of diagonalizability to it: given a 'bootstrapping' schema, you can get the model to build a schema for dynamic flows. It's no different from what has always had to happen for someone to write a compiler for a programming language in that language itself.
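The bootstrapping idea might look something like this sketch: hand the model a meta-schema that describes flows, let it emit a concrete flow conforming to that schema, then execute the flow it produced. Every name here is hypothetical, and the model call is a hard-coded stub.

```typescript
// A 'bootstrapping' schema: a type that describes flows, rather than one
// fixed flow. The model fills it in; we then run whatever it produced.

type Step = { id: string; prompt: string };
type Flow = { steps: Step[] };

// Stand-in for an LLM call that, shown the Flow type as its schema,
// returns a dynamically constructed flow for the given task.
function modelBuildFlow(task: string): Flow {
  // A real system would send the Flow schema to the model; here we
  // hard-code a plausible response to keep the sketch self-contained.
  return {
    steps: [
      { id: "outline", prompt: `Outline how to: ${task}` },
      { id: "draft", prompt: `Write a draft for: ${task}` },
    ],
  };
}

// Execute the generated flow; in practice each step could itself be a
// model call, giving you dynamic multi-step flows from one schema.
function runFlow(flow: Flow, runStep: (s: Step) => string): string[] {
  return flow.steps.map(runStep);
}

const flow = modelBuildFlow("summarize a paper");
const results = runFlow(flow, (s) => `ran ${s.id}`);
console.log(results); // [ 'ran outline', 'ran draft' ]
```

The analogy to self-hosting compilers holds: the fixed part is only the schema-of-schemas, and everything downstream of it can be model-generated.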