From a design point of view, given all the back and forth and the need to curate and guide the algorithm, I think we're still some way off getting perfect results from prompts alone.
I can see an immediate use case for an AI layer in apps like Photoshop, Figma, Sketch, GIMP, Unreal Engine, etc. that works in the background, periodically filling in based on the current canvas.
You could prompt for inspiration, then start cutting, erasing, moving things around, blending manually, and hand-drawing some elements, then re-roll the AI; rinse and repeat.
I'm sure someone is working on this already, but there seems to be a lot of scope for integrating these models into existing workflows.