I still routinely use NVIDIA 3D Vision to build protein structures into electron density maps. NVIDIA hasn't updated the driver in generations, and no new compatible displays or parts have been made in eons, but it's an invaluable tool. My concern is that Wayland is going to kill it entirely. There is definitely a market for this.
I'm using a Dell S2716DG, a 27" 1440p TN panel at 120 Hz. As far as I'm aware, only a few 1440p models ever got 3D Vision support (officially). I've tried running 3D Vision on an OLED ultrawide I have, and it works, but only on the bottom of the screen. I assume it's something to do with the refresh rate (144 Hz) or the pixel response time (which I think isn't great with OLED).
I'm driving it with an A4500, on Linux (openSUSE mainly), 3-pin to the USB emitter, with glasses. The A4500 is somewhat gimped because only the 470.xx driver works. With the newer drivers, X11 detects the display and emitter but displays both frames simultaneously. I think it might have something to do with the stereo declaration in xorg.conf being different with the newer drivers, but I'm still chasing down that lead.
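For reference, the stereo declaration in question boils down to a single option on the Screen section of xorg.conf. A sketch, not my exact file (identifiers are placeholders; per NVIDIA's driver README, "3" is onboard DIN stereo and "10" is 3D Vision via the USB emitter, and I haven't confirmed which value the newer drivers expect):

    Section "Screen"
        Identifier "Screen0"
        Device     "Device0"
        # "3" = onboard (3-pin DIN) stereo; "10" = 3D Vision USB emitter
        Option     "Stereo" "3"
    EndSection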
LOL, what? There are indeed 3D displays with eye tracking built in that do exactly this. I was at the Display Week expo just last week and saw a handful of new models, and they've been around for years.
The real use case is eye-catching advertising: distracting passers-by and stealing attention.
Pricing is getting more competitive with VR and 3D TVs, though, once you include the cost of the PC needed to drive them. We might start seeing them in a few specialist places, like your more exclusive dental surgery. Is 32" big enough for architects to sell designs to customers?
How long is that actually going to work? At some point the novelty will wear off and people will just walk past it, leaving us with yet another noise generator in our urban environment.
This reminds me of the fad of placing beacons everywhere in shopping centers that pushed "helpful" (annoying) notifications to the phone of anyone who walked by.
It’s lenticular, but with 100 different possible angles, so no eye tracking is needed and it works with multiple viewers. The tradeoff seems to be that you need to pump in a lot of data for all those views, and you probably need a pretty high-resolution, high-brightness screen. There’s a good description in their docs.
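To put rough numbers on "a lot of data" (my own back-of-envelope, not from their docs): 100 views at even a modest 512x512 per view is ~26 megapixels per frame, roughly 79 MB uncompressed at 3 bytes per pixel, which works out to about 2.4 GB/s of raw pixels at 30 fps before any compression or clever packing. Presumably rendering, not the panel, becomes the bottleneck.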
I've got the Portrait and it's pretty good. Definitely not "this is ACTUALLY 3D!" but certainly in the "huh, that's got some depth" zone. It's much more impressive if you have "real" depth maps (i.e. you're rendering the content, or using stereoscopic cameras, LIDAR, iPhone Portrait mode, etc.) - most of the stuff on mine is ML'd depth maps from old photos.
I have been working on a portable camera array to put real video on these displays. The existing video demos are either projected depth maps or very expensive, stationary, indoor camera rigs. Despite my trying to stay away from AI generation, making the array sparse and then synthesizing the missing angles and dropped frames actually solved a lot of problems better than more hardware would have.
When something improves by an order of magnitude, prior descriptors may be misleading. (Wealthy people take advantage of this oversight all the time, in my experience.)
When anything changes by an order of magnitude, you have to reassess all your assumptions for application spaces. This can have big impacts on how you decide to invest.
I don't get it. What does that mean? A state-of-the-art processor today is just a bucket of the same transistors invented in 1954, yet the two are incomparable in complexity and functionality.
It’s all just carefully arranged sand used to send photons at our eyeballs, maybe with some bits of metal to spice things up. A laptop is really the same thing as a stained glass window from hundreds of years ago.
I strongly disagree. The "thing" is not in the parts, nor even in the whole collection of the parts, BUT in how the parts connect with each other.
I often think about it like this: I have a rectangular area of 8 x 8 bricks, each placed on its smaller side next to the others, so each takes up a square patch of the surface.
Then I want to add something to my collection of bricks: I want to add information to it.
So what do I add? I add nothing. Instead, I take away some of those bricks. Afterwards, the missing pieces together form some letter, say a K, or any other. Now I have the information "K", which perhaps, according to our agreed-upon code, means "OK".
So I have added information to that square of bricks by taking away some of them. I think that is marvelous. "The whole is greater than the sum of its parts" is catchy, but it's not the full story. It is more apt to say: "The whole is the collection of its parts AND their connections."
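If it helps, here's the same idea in a few lines of Python (a toy sketch of my own):

    # 8x8 wall: "X" marks a brick that was removed. The K is encoded
    # entirely by which bricks are absent, not by anything added.
    K_REMOVED = [
        "X...X...",
        "X..X....",
        "X.X.....",
        "XX......",
        "X.X.....",
        "X..X....",
        "X...X...",
        "X....X..",
    ]
    for row in K_REMOVED:
        # "#" = brick still in place, " " = brick taken away
        print("".join(" " if cell == "X" else "#" for cell in row))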
You do realize China and India make up a good chunk of the world population and are undoubtedly not white, right? Almost 50% of it is in Central and East Asia alone.
It’s not like there isn’t a Wikipedia article you could peruse, but basic logic should get you there on its own.
> the people from East Asia, which consists of China, Japan, Mongolia, North Korea, South Korea, and Taiwan.
Seriously? At best, if you squint, you can maybe consider Mongols to be white. But the vast majority of East Asia's population is in China, so it's a moot point.
For all intents and purposes, East Asians are considered to be people of color.
The concept "People of Color" is annoyingly and absurdly racist.
Chinese people are different from all the various black peoples of Africa (no, they are not all the same; sub-Saharan Africa has a lot of ethnic groups with vastly different phenotypes), who are different from Semites, who are different from Slavs, who are different from Ashkenazi Jews, who are different from North African Berbers, who are different from Bedouins. And yet you fucking liberals decided to put us all in the same bucket: "people of color".
We don't fucking care about your sick and stupid infantile politics. Stop imposing your craziness onto the rest of the world.