
As someone who works with a wide range of AR displays: if this is the reason for the UI change, they've fucked up hard.

Blurring in AR is quite difficult because it requires an accurately aligned image of the world behind the overlay. The point of AR is that it's just an overlay; you don't need to render what's already there. But to make a blur, you need the underlying image, and that costs energy, which you don't really have on AR glasses.
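
To make that concrete: compositing an opaque overlay only needs the overlay's own pixels, but a blurred overlay has to read and average the pixels behind it. A minimal sketch in Python/NumPy, assuming you even have a captured frame to sample from (the frame and overlay rectangle here are made up for illustration):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Hypothetical camera frame (H x W x 3). On see-through AR glasses
    # you may not have this image at all; the "background" is the world.
    frame = np.random.rand(480, 640, 3)

    # Opaque overlay: just write pixels, no background reads needed.
    overlay = np.zeros_like(frame)
    overlay[100:200, 100:300] = (0.2, 0.2, 0.8)

    # Blurred overlay: every output pixel averages many background
    # pixels, so you must capture and process the underlying image.
    region = frame[100:200, 100:300]
    backdrop = gaussian_filter(region, sigma=(5, 5, 0))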



I would actually expect AR displays to be naturally transparent. I'm not a specialist at all, but achieving a transparent screen with perfectly opaque rendered areas sounds quite unrealistic.

If the display is naturally transparent, I don't see the need for a non-opaque UI.


> If the display is naturally transparent, I don't see the need for a non-opaque UI.

You're right, but it depends on the screen type. It turns out that just being transparent isn't actually good enough; you really want to be able to dim the background as well. That lets you draw over real-world objects much more effectively.
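
Roughly: an additive see-through display can only add light on top of the world, so a bright background washes the UI out, and a per-pixel dimming layer attenuates the world before your pixels are added. A toy model with made-up values:

    # Toy model of an additive see-through display (linear light).
    # perceived = world * dimmer + rendered
    world = 0.9                          # bright real-world background
    rendered = 0.5                       # what the display emits
    undimmed = world * 1.0 + rendered    # UI-to-world contrast ~0.6:1
    dimmed = world * 0.1 + rendered      # UI-to-world contrast ~5.6:1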

But that adds a whole extra level of complication.


I just had a holy-shit moment reading what you said. I had not considered that AR overlays can't be blurred without sampling what's behind the overlay/glass.

People are in for a world of pain when they realize this.


Unless the blur is built into the optics of the glass itself somehow!


That only gets you a fixed frosted-glass diffusion of everything behind the lens, not a per-element blur.

Imagine an overlay in front of a red circle. If you want the red circle blurred in the overlay, you need to know about the red circle and sample from it for each pixel. The Vision Pro can't sample the entire viewport at 120 fps (or whatever frame rate it actually runs at). It would be a janky mess.
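
Back-of-the-envelope on the sampling cost, using the 120 fps figure above and an assumed Vision Pro-class panel resolution:

    pixels_per_eye = 3660 * 3200   # assumed, roughly Vision Pro class
    eyes = 2
    fps = 120                      # the figure above
    taps = 25                      # e.g. a 5x5 blur kernel per pixel
    samples = pixels_per_eye * eyes * fps * taps
    print(f"{samples / 1e9:.0f} billion texture reads/sec")  # ~70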

The Vision Pro UI is not transparent/translucent; it's frosted.

https://www.apple.com/newsroom/images/2024/02/apple-announce...


> The Vision Pro can't sample the entire viewport at 120 fps

It's even worse than that: each pane of glass that blurs needs to do its own sampling of what's behind it. That means you have to render back to front, sampling in 3D, for each eye, to get realistic blur effects. Your vision isn't a 2D grid coming from each eye; it's conic. A line from your retina out through two equally sized panes placed one in front of the other will likely pass through a different point on each pane.

You'd probably need to implement this with ray tracing to make it truly accurate, or at least convincing, and to keep your device from slowing to a crawl as you open more and more windows.
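
A quick sketch of the geometry, with the eye at the origin, panes modelled as planes at fixed depths, and made-up numbers: the same view ray hits stacked panes at different lateral offsets, so each pane samples a different patch of whatever sits behind it, and differently for each eye.

    import numpy as np

    def intersect_pane(eye, direction, pane_z):
        # Point where a view ray from the eye hits a pane at depth pane_z.
        t = (pane_z - eye[2]) / direction[2]
        return eye + t * direction

    eye = np.array([0.0, 0.0, 0.0])
    ray = np.array([0.3, 0.1, 1.0])          # an off-axis view ray
    p_near = intersect_pane(eye, ray, 1.0)   # pane 1 m away
    p_far = intersect_pane(eye, ray, 2.0)    # pane 2 m away
    # x/y offsets differ, so the panes can't share one sampled backdrop.
    print(p_near[:2], p_far[:2])             # [0.3 0.1] vs [0.6 0.2]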


This is insightful, thank you. Question: since you work with a wide range of AR displays, what do you suggest that's readily available and has an SDK?


Alas, I work for a very large company that has a bunch of teams making prototypes for us to use.

However, a secondhand Quest 3 with Unity in MR mode is the cheapest way to get started. It gives you SLAM/a world frame of reference, which almost no other system does (apart from the HoloLens).


AR glasses will have some sort of camera. It's easy enough to warp the captured video to roughly match the view from each eye. It doesn't have to be perfectly aligned, clear, or high-resolution; it just needs to be sufficient to provide a faux blurred background behind UI elements.
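
A minimal sketch of that idea with OpenCV, using a made-up camera-to-eye homography (a real one comes from calibration, and a single homography is only exact for a planar scene or pure rotation):

    import cv2
    import numpy as np

    # Hypothetical captured frame and an illustrative homography.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    H = np.array([[1.02, 0.01, -8.0],
                  [0.00, 1.02, -5.0],
                  [0.00, 0.00,  1.0]])

    # Warp the camera view toward the eye's viewpoint, then blur only
    # the patch that sits behind the UI element.
    eye_view = cv2.warpPerspective(frame, H, (640, 480))
    patch = eye_view[100:200, 100:300]
    backdrop = cv2.GaussianBlur(patch, (31, 31), 0)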

Looking at Liquid Glass, they've certainly solved it for higher-res backdrops. Low-res should be simpler. It won't be as clean as Liquid Glass, but it could probably match visionOS quality.


Oh, it's possible, but it costs a lot of power and has design implications.

You need the camera on and streaming. Sure, you only need a portion of the frame, but the camera still has to cover your entire display area, and the output has to be remapped. It also means your camera now has limited placement options.

Having the camera on costs power too, so your GUI isn't just costing render power; it's costing more on top because the camera has to stay on as well.


I think you're over-thinking it. Camera power is extremely cheap. Amazon's Ring cameras run for months on a single charge. It's the display, refreshing content at 24–60 Hz (or more), that consumes power.

The camera will have to turn on for the glasses to show you metadata, right? The camera will see what you see, just from a slightly different angle than each eye. A simple video matrix can warp the image to match each eye again. Cut out what you don't need, and just keep what's needed for the UI element. The AR glasses could simply have a dedicated chip for the matrix and other FX. I imagine view depth could take extra work, but iPhones do that now with their always-on lockscreen.


> I think you're over-thinking it

Nope, it's experience. Why do you think the Oculus headsets have those funny warping issues? It's down to camera placement.

> A simple video matrix can warp the image to match each eye again

Occlusions are not your friend here.

> Cut out what you don't need, and just keep what's needed for the UI element.

For UI that's fixed in screen space, this works; for UI that's locked to world space, you need to be much more clever about your warping (see the sketch below). Plus, you're now doing real-time, low-latency processing on really resource-constrained devices.
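
To sketch the difference (the intrinsics and pose below are made up): a screen-locked backdrop is one fixed rectangle with one static warp, while a world-locked element has to be reprojected with the head pose every single frame.

    import numpy as np

    # Screen-locked UI: the backdrop patch is a fixed pixel rectangle,
    # so a single precomputed camera-to-eye warp is enough.
    screen_rect = (100, 200, 100, 300)

    # World-locked UI: the patch moves with head pose, so the warp must
    # be recomputed at display rate from the SLAM pose.
    def ui_pixel(world_pt, head_from_world, fx=500.0, cx=320.0, cy=240.0):
        # Project a world-anchored point into the current eye image.
        p = head_from_world[:3, :3] @ world_pt + head_from_world[:3, 3]
        return (fx * p[0] / p[2] + cx, fx * p[1] / p[2] + cy)

    pose = np.eye(4)   # made-up head pose; updates every frame in reality
    print(ui_pixel(np.array([0.2, 0.0, 2.0]), pose))   # (370.0, 240.0)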

> I imagine view depth could take extra work,

Yes and no. If you have a decent SLAM stack with some object tracking, you kind of get depth for free. If you have 3D gaze vectors, you can also use those to estimate the depth of whatever you're looking at without doing anything else (though accurate gaze estimation needs calibration).
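
For the gaze case, depth falls out of vergence: the two eye rays converge at the fixated point, so triangulating them gives its distance. A minimal sketch using the midpoint of the closest points between the two rays (the IPD and gaze directions are made-up example values):

    import numpy as np

    def vergence_depth(ipd_m, left_dir, right_dir):
        # Eyes sit on the x axis, separated by the interpupillary distance.
        o_l = np.array([-ipd_m / 2, 0.0, 0.0])
        o_r = np.array([+ipd_m / 2, 0.0, 0.0])
        d_l = np.asarray(left_dir) / np.linalg.norm(left_dir)
        d_r = np.asarray(right_dir) / np.linalg.norm(right_dir)
        # Solve for the ray parameters minimising the gap between rays.
        b = o_r - o_l
        t, s = np.linalg.solve([[d_l @ d_l, -(d_l @ d_r)],
                                [d_l @ d_r, -(d_r @ d_r)]],
                               [b @ d_l, b @ d_r])
        midpoint = (o_l + t * d_l + o_r + s * d_r) / 2
        return midpoint[2]

    # Eyes 63 mm apart, both gazing at a point ~2 m straight ahead.
    print(vergence_depth(0.063, [0.01575, 0, 1], [-0.01575, 0, 1]))  # ~2.0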

> but iPhones do that now with their always-on lockscreen

That's just a rendering trick. It's not actually looking for your face all the time; most of that is the accelerometer. Plus, it doesn't need to be accurate, it just has to move more or less in time with the phone.

> Camera power is extremely cheap

Yes, but not for glasses. Glasses have about 1.3 watt-hours for the whole day. Cameras consume about 30-60 mW, which is about half your power budget if you want a 12-hour day.
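
The arithmetic, using the numbers above:

    battery_wh = 1.3
    day_hours = 12
    budget_mw = battery_wh / day_hours * 1000   # ~108 mW average draw
    camera_mw = 45                              # middle of 30-60 mW range
    print(f"{camera_mw / budget_mw:.0%} of the budget")  # ~42%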

> Amazon's Ring cameras run for months on a single charge

Yes, but the camera isn't on all the time; it has a PIR sensor to work out whether there's movement. Plus, the battery is much, much bigger (I think it has about 23 watt-hours).



