Nope, it's experience. Why do you think the Oculus has those funny warping issues? It's down to camera placement.
> A simple video matrix can warp the image to match each eye again
Occlusions are not your friend here.
> Cut out what you don't need, and just keep what's needed for the UI element.
For UI that's fixed in screen space, this works. For UI that's locked to world space, you need to be much more clever about your warping. Plus you're now doing realtime, low-latency stuff on really resource-constrained devices.
> I imagine view depth could take extra work,
Yes and no. If you have a decent SLAM stack with some object tracking, you kind of get depth for free. If you have 3D gaze vectors, you can also use those to estimate the depth of whatever you're looking at without doing anything else. (But gaze estimation that's accurate enough needs calibration.)
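To make the gaze-vector idea concrete, here's a minimal sketch of vergence-based depth: given calibrated 3D gaze rays from each eye (assumed to be expressed in a common head frame), the fixation point is roughly where the two rays come closest to each other. The eye positions and numbers below are illustrative, not from any real headset.

```python
import math

def vsub(a, b): return tuple(x - y for x, y in zip(a, b))
def vadd(a, b): return tuple(x + y for x, y in zip(a, b))
def vscale(a, s): return tuple(x * s for x in a)
def vdot(a, b): return sum(x * y for x, y in zip(a, b))
def vnorm(a):
    n = math.sqrt(vdot(a, a))
    return tuple(x / n for x in a)

def vergence_depth(p_left, d_left, p_right, d_right):
    """Estimate the fixation point as the midpoint of closest approach
    between the two gaze rays (in practice they rarely intersect exactly)."""
    d_left, d_right = vnorm(d_left), vnorm(d_right)
    w = vsub(p_left, p_right)
    a = vdot(d_left, d_left)
    b = vdot(d_left, d_right)
    c = vdot(d_right, d_right)
    d = vdot(d_left, w)
    e = vdot(d_right, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:      # rays near-parallel: looking at infinity
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = vadd(p_left, vscale(d_left, t1))
    q2 = vadd(p_right, vscale(d_right, t2))
    return vscale(vadd(q1, q2), 0.5)

# Eyes 64 mm apart, both fixating a point 0.5 m straight ahead.
left_eye  = (-0.032, 0.0, 0.0)
right_eye = ( 0.032, 0.0, 0.0)
target    = ( 0.0,   0.0, 0.5)
fix = vergence_depth(left_eye, vsub(target, left_eye),
                     right_eye, vsub(target, right_eye))
```

The catch is exactly the calibration point above: a degree or two of gaze error moves the ray intersection a long way at arm's length and beyond, which is why uncalibrated vergence depth gets noisy fast.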
> but iPhones do that now with their always-on lockscreen
That's just a rendering thing. It's not actually looking for your face all the time; most of that is the accelerometer. Plus it's not like it needs to be accurate, it just has to move more or less in time with the phone.
> Camera power is extremely cheap
Yes, but not for glasses. Glasses have about 1.3 watt-hours for the whole day. Cameras consume about 30-60 mW, which is up to about half your power budget if you want a 12-hour day.
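Working through the arithmetic behind that claim (using only the numbers already stated: a 1.3 Wh battery, a 12-hour day, and a 30-60 mW camera):

```python
# Sanity-check the glasses power budget.
battery_wh = 1.3                            # whole-day battery capacity
day_hours = 12                              # target wearing time
budget_mw = battery_wh / day_hours * 1000   # allowed average draw, in mW

camera_mw = (30, 60)                        # rough always-on camera draw range
share = tuple(c / budget_mw for c in camera_mw)

print(f"average budget: {budget_mw:.0f} mW")
print(f"camera share of budget: {share[0]:.0%} to {share[1]:.0%}")
```

That works out to roughly a 108 mW average budget, so an always-on camera eats somewhere between a quarter and a half of it before you've powered the SoC, radios, or display.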
> Amazon's Ring cameras run for months on a single charge
Yes, but the camera isn't on all the time; it has a PIR sensor to work out if there is movement. Plus the battery is much, much bigger. (I think it has about 23 watt-hours of battery.)