
Question for people who know Adobe Lightroom: could this feature be compromised? Is it just making API calls to some remote service?


Lightroom has a local Heal/Remove feature, and at least in LR Classic you have to tick a box for the AI Remove, which processes the image on Adobe's servers.

As for whether it can be compromised... Probably? It sends all or part of your photo to a remote server, so that data could certainly be taken.


I mean, getting the model to behave this way looks too easy, and I assume Adobe does QC on the features it releases, so I don't see an alternative explanation - unless Adobe's QC is poor or nonexistent.


I'm not sure what you mean by compromised, but I'm pretty sure Adobe's Firefly AI features are server-based. These features are too good to be running locally.


Plus even if it could be done locally, doing it server-side has the side benefit (for Adobe) of making it trivial to prevent pirates from ever being able to use those features.


By compromised I mean something like someone gaining access to the Adobe servers where this runs and uploading troll models or tampering with the model's responses.


Stretching when I wake up, plus yoga and/or swimming and/or lifting, has done a lot for me.

Time spent on it varies between 2h (very lazy week) and 10h (very active week).

I feel like this has helped me prevent a lot of the symptoms described in both the write-up and the comments.


While it certainly does not solve everything, the work being done with verifiable VMs is very interesting.

Today's most advanced projects can compile pretty much arbitrary Rust code into provable RISC-V programs (using SNARKs).

IMO that solves a good chunk of the problem of proving to software users that what they get is what they asked for.
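
For a concrete picture, here is a minimal sketch of what a guest program for such a verifiable VM might look like. The `zkvm` module and its `read`/`commit` functions are hypothetical stand-ins, loosely modeled on real zkVM guest APIs (RISC Zero, SP1), purely to show the shape of the workflow:

    // Hypothetical guest-side API: `zkvm` is NOT a real crate, just a stand-in.
    // Ordinary Rust logic is compiled to a RISC-V target, executed inside the
    // VM, and the VM emits a SNARK that this exact program produced the
    // committed output.
    use zkvm::guest::env; // assumed module, for illustration only

    fn main() {
        // Private input supplied by the host (a file, a transaction batch, ...).
        let input: Vec<u8> = env::read();

        // Arbitrary deterministic Rust code runs inside the provable VM.
        let checksum: u32 = input
            .iter()
            .fold(0u32, |acc, b| acc.wrapping_add(*b as u32));

        // Only the committed value becomes public; the proof attests that it
        // was computed from *some* input by exactly this program.
        env::commit(&checksum);
    }

On the host side, the prover runs this program and hands the verifier a short proof; the verifier checks it against the program's identity and the committed output without re-running the computation.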


There's a lot of good cryptography and game theory and economic incentive alignment that can be done to constrain and limit the trust assumptions people have to make. But ultimately, all this does is redistribute and dilute those trust assumptions. It doesn't eliminate them. There is no such thing as "trustlessness".


I do think there is. For instance, I can convince you that two graphs are not isomorphic while sparing you the burden of doing the computation yourself.
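
To make that concrete, here is a toy simulation of one round of that classic interactive protocol. Everything in it (the example graphs, the brute-force prover, the use of the rand 0.8 and itertools crates) is my own illustration, not any library's protocol code: the verifier secretly scrambles one of the two graphs, and a prover who can reliably name which graph it was gives evidence they are not isomorphic - without the verifier ever searching for an isomorphism itself.

    // Toy sketch of one round of the interactive proof of graph NON-isomorphism.
    // Graphs are adjacency matrices; the prover is assumed computationally
    // unbounded (here: a brute-force search over tiny graphs).
    use itertools::Itertools;   // external crate, for permutations()
    use rand::seq::SliceRandom; // external crate (rand 0.8-style API)
    use rand::Rng;

    type Graph = Vec<Vec<bool>>;

    // Relabel the vertices of a graph according to a permutation.
    fn permute(g: &Graph, perm: &[usize]) -> Graph {
        let n = g.len();
        let mut h = vec![vec![false; n]; n];
        for i in 0..n {
            for j in 0..n {
                h[perm[i]][perm[j]] = g[i][j];
            }
        }
        h
    }

    // Brute-force isomorphism check; fine for toy sizes.
    fn isomorphic(g: &Graph, h: &Graph) -> bool {
        let n = g.len();
        (0..n).permutations(n).any(|p| permute(g, &p) == *h)
    }

    fn main() {
        // Two graphs on 3 vertices: a path and a triangle (not isomorphic).
        let g0: Graph = vec![
            vec![false, true, false],
            vec![true, false, true],
            vec![false, true, false],
        ];
        let g1: Graph = vec![
            vec![false, true, true],
            vec![true, false, true],
            vec![true, true, false],
        ];

        let mut rng = rand::thread_rng();

        // Verifier: secretly pick one graph, scramble its labels, send the result.
        let b: usize = rng.gen_range(0..2usize);
        let mut perm: Vec<usize> = (0..3).collect();
        perm.shuffle(&mut rng);
        let challenge = permute(if b == 0 { &g0 } else { &g1 }, &perm);

        // Prover: say which original graph the challenge came from. If g0 and g1
        // were isomorphic, the challenge would reveal nothing and the prover
        // could only guess (caught with probability 1/2 per round).
        let answer: usize = if isomorphic(&g0, &challenge) { 0 } else { 1 };

        // Verifier: accept iff the prover named the graph it actually picked.
        println!("round accepted: {}", answer == b);
    }

Repeating the round k times drives a cheating prover's success probability down to 2^-k, which is the sense in which the verifier gains confidence without doing the hard computation or trusting the prover.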


TIL

> … zero-knowledge succinct non-interactive argument of knowledge (zkSNARK), which is a type of zero-knowledge proof system with short proofs and fast verification times. [1]

[1] Microsoft Spartan: High-speed zkSNARKs without trusted setup https://github.com/microsoft/Spartan


> Today's most advanced projects are able to compile pretty much arbitrary rust code into provable RISC-V programs

Provable does not imply secure.


Care to expand? Your point is interesting and I'm happy to engage with it, but I'm not sure which dimension you have in mind.


I often think that people who say others make things purposefully obscure to gain or retain legitimacy fail to recognize that any long-standing field, whether in the sciences or the humanities, is inherently complex due to its long-evolved jargon and norms.

Still, I am aware of cases where complexity has been used as a means of power. Some languages, for instance, have baked-in orthographic nuances and difficult grammar rules that do just that.

It would be interesting to measure how much of the complexity in such examples could actually be cut. I suspect not much, for reasons of both culture and power.

