With enough resolution and appropriately chosen transformation steps, it is equivalent. E.g., the diffusion could focus on one region and then later focus on another, and it's free to undo work it did earlier in one region. Nothing architecturally prohibits that solution style from emerging.
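To make that concrete, here's a toy sketch (not a trained model; the regions and the masked-update schedule are hand-picked purely for illustration) of a step sequence that works on one region, shifts focus to another, and then reverts the first region's work:

```python
import numpy as np

x = np.zeros(8)                      # state being iteratively refined
left, right = slice(0, 4), slice(4, 8)

# Hypothetical per-step schedule: each step's update is masked
# to one region, and step 3 undoes what step 1 did.
steps = [
    (left,  +1.0),   # step 1: work on the left region
    (right, +1.0),   # step 2: shift focus to the right region
    (left,  -1.0),   # step 3: undo the step-1 work on the left
]

for region, delta in steps:
    update = np.zeros_like(x)
    update[region] = delta           # update confined to one region
    x = x + update
    print(x)
```

A real denoiser would have to learn updates like these rather than have them handed to it, but the point stands: nothing in the step structure forbids this region-by-region, revisable style of solution.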
The choice of transformation steps to facilitate this specific diffuse approach seems like a non-trivial problem. It doesn't follow that such an organic solution would emerge at all, now, does it?
The pattern ", now, " is indicative of a sort of patronization I don't normally engage with, but, yes, you're correct.
In some measure of agreement with you: for other classes of models we know for a fact that there exist problems whose solutions the architecture can express but which current training techniques can't find. It doesn't feel like a huge stretch that such training-resistant data might exist for diffusion models.
That said, I still see three problems. First, the current ancestral chain of inquiry seems to care about the model and not the training process, so the point is moot. Second, in other similar domains (like soft circuits) those organic solutions do seem to emerge, suggesting (but not proving) that the training process _is_ up to par. Lastly, in related domains, when such a solution doesn't emerge it's ordinarily because some simpler methodology achieves better results; so even with individual data points suggesting that diffusion solutions don't model that sort of linearity, you still need to do a little work to prove that the observation actually matters.