I agree with you on the idea of prompts and seeds leaving much to be desired. So that's why I think more sophisticated steering is necessary.
The models' latent space is extremely powerful, but you get hamstrung by the text encoder's whims when you do things through a prompt interface. In particular, you've hit exactly the issue I have with current LLMs in general: they're locked into words and concepts that others have defined (labelings of points in the latent space).
Wishy-washy thinking: it'd be nice if there were some sort of Turing-complete, lambda-calculus-style way to prompt these models instead, where you could define new terms, build expressions, and use loops and recursion or something.
It would be a bit like how SVGs are "intent complete" and undeniably art, but instead of vector graphics, it would be an SVG-like model prompt.
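To make that lambda-calculus idea a little more concrete, here's a toy sketch. Everything in it is made up for illustration: the `blend`, `scale`, and `iterate` combinators and the tiny three-dimensional "latent" vectors are stand-ins, not any real model's API. A real system would operate on actual embedding vectors from the model, but the shape of the idea is the same: user-defined terms, composable expressions, and recursion over points in latent space.

```python
# Toy "prompt calculus" sketch (hypothetical -- not a real model API).
# Concepts are points in a latent space, modeled here as plain float
# tuples. Prompts are expressions built by composing those points, so
# the user can coin new terms instead of relying only on the text
# encoder's existing vocabulary.

def blend(*terms):
    """Compose concepts by averaging their latent vectors."""
    n = len(terms)
    return tuple(sum(t[i] for t in terms) / n for i in range(len(terms[0])))

def scale(term, weight):
    """Emphasize or de-emphasize a concept."""
    return tuple(weight * x for x in term)

def iterate(term, f, times):
    """Looping/recursion: apply a transform to a concept repeatedly."""
    for _ in range(times):
        term = f(term)
    return term

# A user-defined vocabulary: these labels are ours, not the encoder's.
env = {
    "warm": (1.0, 0.0, 0.2),
    "dusk": (0.3, 0.7, 0.9),
}
env["my_mood"] = blend(env["warm"], env["dusk"])  # a brand-new term

# An "expression" as a prompt: drift my_mood toward dusk three times.
prompt = iterate(env["my_mood"], lambda t: blend(t, env["dusk"]), 3)
```

The interesting part isn't the arithmetic, it's that the environment `env` belongs to the user: once you can name a point in latent space yourself, you're no longer limited to the labels someone else baked into the encoder.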