
Sure, if you have the private keys you can publish the result to whomever you want. But you don't need and wouldn't benefit from FHE in any way in this case.



You would benefit from FHE: the users would know that data never leaves the device, the inference is done locally, and only the result is shared.

I don't have a link to a paper describing such a system, but I think a combination of FHE and some kind of enclave could serve this purpose (leaving aside FHE's potential performance issues).


If the data is encrypted with my key, no one else can access it or do anything else with it. Period - there is nothing more to talk about (assuming that the encryption scheme is secure, of course). No one can extract anything from this data unless they have my private key.

Formally, FHE is simply an encryption scheme with the following property:

  Program(Encrypted(data, key)) = Encrypted(Program(data), key)

FHE allows me to securely use someone else's hardware to run my inference on my data and be confident that I am the only one who knows the result. If the data is on my hardware, and I don't want it to leave my hardware, then FHE is completely useless for me.
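To make that property concrete, here is a toy additively homomorphic scheme in the Paillier style, with deliberately tiny, insecure parameters (a real deployment uses 2048-bit-plus keys, and full FHE supports arbitrary programs, not just addition). The "Program" here is adding two numbers, and the server performing the addition never sees the plaintexts:

```python
# Toy Paillier-style additive homomorphic encryption, illustrating
#   Program(Encrypted(data, key)) = Encrypted(Program(data), key)
# for Program = addition. INSECURE toy parameters, for illustration only.
import math
import random

p, q = 293, 433                 # toy primes; real keys are far larger
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)    # private key

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # private key component

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The untrusted server multiplies ciphertexts, which adds the plaintexts:
a, b = 20, 22
c_sum = (encrypt(a) * encrypt(b)) % n2   # homomorphic addition
assert decrypt(c_sum) == a + b           # only the key holder learns 42
```

The point of the exercise: the party holding only ciphertexts can compute on the data, but only the private-key holder can read the result.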

What you actually want is something like trusted computing. The government decides what analysis to run, sends it to my hardware, my hardware runs that analysis on my decrypted data and sends back the result, in such a way that the government can be certain the algorithm was followed exactly. Of course, you need some assurances even here, so that the government can't just ask for the plaintext data itself: there have to be limits on what it can run.
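A minimal sketch of that flow, under loud assumptions: an HMAC with a device-held key stands in for a real hardware attestation key (e.g. one fused into a TPM or enclave), and the allow-list of approved queries stands in for whatever policy limits what may be run. All names here are hypothetical illustration, not a real attestation protocol:

```python
# Sketch of the trusted-computing flow: the device runs an APPROVED
# analysis on its plaintext data and signs the result, so the verifier
# can check which analysis produced which result. The HMAC key is a
# stand-in for a hardware attestation key; this is NOT real attestation.
import hashlib
import hmac
import json

DEVICE_KEY = b"stand-in-for-hardware-key"   # never leaves the device

APPROVED = {"count_over_limit"}             # limits on what may be run

def run_attested(query_name, threshold, plaintext_data):
    if query_name not in APPROVED:          # refuse arbitrary programs,
        raise ValueError("query not approved")  # e.g. "dump all data"
    result = sum(1 for x in plaintext_data if x > threshold)
    # Bind query and result together under the device key.
    msg = json.dumps({"query": query_name, "threshold": threshold,
                      "result": result}).encode()
    tag = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag

# Verifier side: check the signature with the device's registered key.
msg, tag = run_attested("count_over_limit", 100, [40, 150, 220])
expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```

The allow-list is the crucial part: without it, "trusted computing" degenerates into the government asking the device to sign and return the raw data.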



