Is this true? I would have thought all you'd need is to give it an input that maps to a 3D surface that's adversarial. There's an extra step in the preprocessing pipeline, but the basic technique is the same - gradient descent on the inputs until you derive ones that are sufficiently adversarial.
All neural nets are vulnerable to adversarial examples. It's a fundamental property of how they work, because they're essentially stacked linear models. So (for example) they become more confident about their predictions when given a sufficiently out-of-domain input - crafting adversarial examples is essentially just finding inputs that trigger an out-of-domain response.
I don't see how an additional transformation before input precludes that.
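Roughly what I mean, as a toy sketch (an untrained stand-in classifier, not anything resembling the real recogniser - just the "step the input, not the weights" idea):

    import torch
    import torch.nn as nn

    # Stand-in classifier; any differentiable model would do for the illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # starting input
    true_label = torch.tensor([3])                     # label we want to move away from

    loss = nn.functional.cross_entropy(model(x), true_label)
    loss.backward()

    # Step the *input* (not the weights) in the direction that increases the loss.
    epsilon = 0.05
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
    print(model(x_adv).argmax(dim=1))  # may now differ from the original prediction

That's one gradient step (FGSM-style); iterate it and you get the usual adversarial-example search.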
I mean you train your network to produce images that translate into adversarial 3d surfaces.
You don't need to produce the correct 3d surface if the surface recogniser is neural - you just need to produce a 3d surface that's adversarial. The adversarial surface could be completely unrealistic, like these adversarial images. (Although the adversarial generator could also be trained with "realism" as a constraint.)
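For example, something like this hedged toy sketch - `image_to_depth` and `recognizer` are made-up stand-ins, not anything Apple actually ships - just to show optimising the printed image *through* the extra transformation:

    import torch
    import torch.nn as nn

    # Hypothetical stand-ins: a differentiable image->depth transform and a
    # depth-based match/no-match recogniser.
    image_to_depth = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 32 * 32))
    recognizer = nn.Linear(32 * 32, 2)

    img = torch.rand(1, 1, 32, 32, requires_grad=True)  # the flat image you'd print
    target = torch.tensor([1])                           # the "match" class we want to force
    opt = torch.optim.Adam([img], lr=0.01)

    for _ in range(200):
        opt.zero_grad()
        depth = image_to_depth(img)  # what the sensor supposedly perceives
        loss = nn.functional.cross_entropy(recognizer(depth), target)
        # a "realism" penalty on `img` could be added to the loss here
        loss.backward()
        opt.step()
        with torch.no_grad():
            img.clamp_(0, 1)         # keep it a printable image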
Are they able to detect depth independently of the visible surface of a presented image? That would make it harder, but the point of failure then is just figuring out a way to fool them dynamically. I wouldn't be confident saying that's impossible.
Yes, FaceID uses actual depth/distance data by projecting IR dots during scanning. So you would either need to very precisely mock these somehow, or create an actual 3D surface.
Yes, Face ID uses infrared depth sensors, so it shouldn't be possible to use just a printed image. You might be able to fool it by printing with some strange material that confuses the sensors, but I don't see the point of coming up with such an advanced technique. Then you might as well just print a 3D model.