I mean, having the model behave this way looks too easy, and I'd guess that Adobe does QC on the features it releases, so I don't see an alternative explanation - unless Adobe's QC is poor or nonexistent.
I'm not sure what you mean by compromised, but I'm pretty sure the Adobe Firefly AI features are server-based. These features are too good to run locally.
Plus, even if they could run locally, doing it server-side has the side benefit (for Adobe) of making it trivial to prevent pirates from ever using those features.
By compromised I mean something like someone gaining access to the Adobe servers where this runs and uploading troll models or tampering with the model's responses.
There's a lot of good cryptography and game theory and economic incentive alignment that can be done to constrain and limit the trust assumptions people have to make. But ultimately, all this does is redistribute and dilute those trust assumptions. It doesn't eliminate them. There is no such thing as "trustlessness".
I do think there is. For instance, I can convince you that two graphs are not isomorphic while sparing you the burden of doing the computation yourself.
> … zero-knowledge succinct non-interactive argument of knowledge (zkSNARK), which is a type of zero-knowledge proof system with short proofs and fast verification times. [1]
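To make that concrete, here's a minimal sketch in Python of the classic interactive proof for graph non-isomorphism (the names and representation are my own choices, not any real library's API). The verifier's work is just coin flips and a vertex relabeling; only the prover does the expensive isomorphism search. If the graphs really aren't isomorphic, the prover answers every challenge correctly; if they were isomorphic, it could only guess, so k rounds drive the chance of being fooled to about 2^-k.

```python
import random
from itertools import permutations

# Sketch of the interactive proof for graph non-isomorphism (GNI).
# Graphs are frozensets of undirected edges over vertices 0..n-1.

def permute(graph, perm):
    """Relabel the vertices of an edge set according to perm."""
    return frozenset(frozenset({perm[u], perm[v]})
                     for u, v in (tuple(e) for e in graph))

def isomorphic(g, h, n):
    """Brute-force isomorphism test: the prover's expensive step."""
    return any(permute(g, p) == h for p in permutations(range(n)))

def gni_round(g0, g1, n):
    """One challenge round. The verifier only flips a coin and
    relabels vertices; the prover searches for isomorphisms."""
    b = random.randrange(2)             # verifier's secret bit
    perm = list(range(n))
    random.shuffle(perm)
    h = permute((g0, g1)[b], perm)      # challenge graph sent to prover
    # The prover sees only h (plus the public g0, g1), never b or perm.
    answer = 0 if isomorphic(g0, h, n) else 1
    return answer == b                  # verifier's cheap check

if __name__ == "__main__":
    n = 4
    g0 = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (0, 2)])  # triangle + isolated vertex
    g1 = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])  # path on 4 vertices
    rounds = 20
    wins = sum(gni_round(g0, g1, n) for _ in range(rounds))
    # Non-isomorphic inputs: the honest prover wins every round.
    # Isomorphic inputs: any prover wins each round with probability 1/2,
    # so 20 rounds leave roughly a one-in-a-million chance of being fooled.
    print(f"prover answered {wins}/{rounds} challenges correctly")
```

The toy brute force isn't the point; the asymmetry is. The verifier's total effort stays at a handful of coin flips and relabelings no matter how hard the underlying non-isomorphism is to establish.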
I often think that people who say others make things purposefully obscure to gain or retain legitimacy fail to recognize that any long-standing field, whether in the sciences or the humanities, is inherently complex, thanks to its long-evolved jargon and norms.
Still, I am aware of cases where complexity has been used as a means of power. Some languages, for instance, have baked-in orthographic nuances and difficult grammar rules that do just that.
It would be interesting to measure how much of the complexity we find in such examples could actually be cut. I suspect not much, for reasons of both culture and power.