Hacker News

If you can run AI in airplane mode, you are not trusting any third party, at least until you reconnect to the Internet. Even if the model were malware, it could not exfiltrate any data before you reconnected.

You’re trusting the third party at training time, when it builds the model. But you’re not trusting it at inference time (or at least, you don’t have to, since you can air-gap inference).


