
I mean, you train your network to produce images that translate into adversarial 3D surfaces.

You don't need to produce the correct 3D surface if the surface recogniser is neural; you just need to produce a 3D surface that's adversarial. The adversarial surface could be completely unrealistic, like these adversarial images. (Although the adversarial generator could also be trained with "realism" as a constraint.)
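
Concretely, something like this toy PyTorch sketch (not a real attack: the recogniser, the image-to-surface map and the loss weights are all made-up stand-ins; the point is just the optimisation loop of nudging a printable image until the surface it induces matches a target embedding, with a realism penalty keeping it photo-like):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Hypothetical frozen "surface recogniser": depth map -> identity embedding.
    surface_recogniser = nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
    ).eval()
    for p in surface_recogniser.parameters():
        p.requires_grad_(False)

    # Placeholder for whatever (differentiable) process turns the printed
    # image into the surface the sensor actually measures.
    def image_to_surface(img):
        return F.avg_pool2d(img, 5, stride=1, padding=2)

    target_embedding = torch.randn(1, 32)        # stand-in for the victim's embedding
    reference_photo = torch.rand(1, 1, 64, 64)   # "realistic" starting image

    adv_image = reference_photo.clone().requires_grad_(True)
    opt = torch.optim.Adam([adv_image], lr=0.05)

    for step in range(200):
        emb = surface_recogniser(image_to_surface(adv_image))
        attack_loss = F.mse_loss(emb, target_embedding)        # fool the recogniser
        realism_loss = F.mse_loss(adv_image, reference_photo)  # stay photo-like
        loss = attack_loss + 0.1 * realism_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        adv_image.data.clamp_(0, 1)  # keep it a printable image

Of course this only works to the extent image_to_surface is differentiable and actually models the physical pipeline, which is exactly the hard part.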

Are they able to detect depth independently of the surface of a presented image? That would make it harder, but then the point of failure is just finding a way to fool the depth sensing dynamically. I wouldn't be confident saying that's impossible.



Yes, Face ID uses actual depth/distance data, captured by projecting IR dots during scanning. So you would either need to spoof that dot pattern very precisely somehow, or create an actual 3D surface. (A toy sketch of a depth-based check is below the link.)

https://support.apple.com/en-us/HT208108
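
To make "actual depth data" concrete, here's a toy depth-consistency check (nothing Apple-specific; the tolerance and geometry are invented): fit a plane to the measured depth map and reject anything that's essentially flat.

    import numpy as np

    def looks_flat(depth_map, tol_mm=3.0):
        # depth_map: HxW per-pixel distances in mm, e.g. recovered from the
        # displacement of the projected IR dots (structured light).
        h, w = depth_map.shape
        ys, xs = np.mgrid[0:h, 0:w]
        A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        coeffs, *_ = np.linalg.lstsq(A, depth_map.ravel(), rcond=None)
        residual = depth_map.ravel() - A @ coeffs
        # If a single plane explains almost all the depth variation,
        # whatever is being presented is flat (a print, a screen, ...).
        return residual.std() < tol_mm

    rng = np.random.default_rng(0)
    flat_print = 300.0 + rng.normal(0, 1.0, (60, 80))   # photo held ~30 cm away
    ys, xs = np.mgrid[0:60, 0:80]
    bumpy_face = 300.0 - 30.0 * np.exp(-((xs - 40) ** 2 + (ys - 30) ** 2) / 400.0)  # crude "nose"

    print(looks_flat(flat_print))   # True  -> reject
    print(looks_flat(bumpy_face))   # False -> has real relief

A printed adversarial image might fool the 2D recogniser but still lands in the "flat" bucket here, hence needing to spoof the dot pattern or build an actual 3D surface.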


Yes, Face ID uses infrared depth sensors, so it shouldn't be possible to fool it with just a printed image. You might be able to fool it by printing with some strange material that confuses the sensors, but I don't see the point of coming up with such an advanced technique. At that point you might as well just print a 3D model.



