Hacker News

EDIT:

I took a quick look at the codebase for the browser (https://github.com/MozillaReality/FirefoxReality), and it isn't what I was expecting (once you dive into app/src): definitions for the three main supported systems each sit at that root level, and I didn't see any kind of "template" or "empty" definition file to guide someone wanting to make one themselves - but maybe that's not possible.

It appears that these files are wrappers around the APIs used for those other products (?). So your only hope would be to base your wrapper on these existing ones, which, while feasible (working from the API docs too), isn't what I would call "ideal". Plus, there doesn't seem to be a separation of concerns between input and output.

I mean, what if I wanted to use my hacked powerglove with my Rift HMD? Or what if I wanted to use my Vive controllers with my Virtuality Pro?

I understand that the supported systems are nominally "all-in-one" solutions, but the parts (HMDs, controllers, tracking) are available separately (at least in some cases) - so mixing and matching might be something someone would want to do. It would definitely be the case for those implementing their own "mixed setup", where they might have an old VPL datasuit and glove, plus some other old pro-level HMD that they like to play with.

As it is, it seems like each setup would have to be a custom "all-in-one" for that setup, so if you had two different systems that used two different HMDs and tracking, but both used the same Powerglove Minelli interface - the code couldn't be "shared" in a reasonable manner (yes, I know with the right symlinking and other buggery it could be accomplished within the current structure, but it isn't ideal).

I was hoping the structure would be more like /plugins, with /input /output, and under those /controller /glove /wand and /hmd /motionplatform /cave, then under each /oculus /vive /google /example

Or something like that. Am I wrong here?
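To make the idea concrete, here's a minimal sketch (in TypeScript, since the joining layer ultimately surfaces in the browser) of what a registry with input and output separated might look like. Every name here is hypothetical - nothing like this exists in the Firefox Reality code:

```typescript
// Output devices (HMDs, motion platforms, CAVEs) only need to render poses.
interface OutputDevice {
  name: string;
  present(frame: { yaw: number; pitch: number }): void;
}

// Input devices (controllers, gloves, wands) only need to report state.
interface InputDevice {
  name: string;
  poll(): { buttons: boolean[] };
}

// A registry keyed by device name, with inputs and outputs kept separate,
// so e.g. a Powerglove could pair with any HMD plugin.
class DeviceRegistry {
  private inputs = new Map<string, InputDevice>();
  private outputs = new Map<string, OutputDevice>();

  registerInput(d: InputDevice) { this.inputs.set(d.name, d); }
  registerOutput(d: OutputDevice) { this.outputs.set(d.name, d); }

  // Mix and match: any registered input with any registered output.
  pair(input: string, output: string): [InputDevice, OutputDevice] {
    const i = this.inputs.get(input);
    const o = this.outputs.get(output);
    if (!i || !o) throw new Error(`unknown device: ${input}/${output}`);
    return [i, o];
  }
}
```

With that shape, the shared "Powerglove Minelli" driver from the example above would be registered once and paired freely, instead of being duplicated into each all-in-one definition.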

---

Ok - I know this is likely open source and thus can be changed, but I honestly wish that things like this offered (and marketed) a means - via plugins or something (and maybe this does, hence "marketed") - to expand the set of supported I/O devices.

Sure - should the "main players" be featured? Of course. But what about those of us who might like to - oh, I don't know - throw on an old Forte VFX-1 and a Powerglove and go at it? Maybe we want to recreate the experience of using REND386?

Or what if you have access to a Polhemus or Ascension magnetic tracking system (or any number of other "pro-level" tracking)? Or HMDs? Or other input devices? Or motion platforms?

Maybe you're a developer of something completely new, and want to play in this same environment and make it compatible...

Again - I haven't dug into the code, and maybe it's designed to accommodate these kinds of use cases. I'd just love to see that capability, if it exists, advertised more. I guess it's because I get the same feeling around software released for my OS of choice, where Windows and Mac are prominently advertised, and Linux, if it's offered at all, is at best the classic stepchild stuffed in a closet.

As an aside, I never can figure out the argument that "we don't offer support for Linux because the market is so small, it isn't worth it for us" - well, if everyone keeps saying and doing that, what do you think will ultimately occur? Do you think that market will grow, stay the same, or shrink?



I think your design taste is sound. But there are some constraints.

The VR/AR vendor market has largely collapsed into a few big players with vertical-integration, ecosystem-capture dreams, so high-investment opaque monoliths are what's available - Valve's OpenVR APIs are the biggest multi-vendor exception. The systems are also immature, in a complex and still-evolving design space, which makes it harder to abstract commonality. And gaming VR imposes strict performance constraints, so some paths may have little slack with which to pay abstraction costs. So having vendor-specific monoliths, joined only at the top, could be the right choice. Any failure to make their JavaScript-level joining flexibly mixable... seems regrettably traditional for browser APIs, and I don't know the cultural background for that.

I've mixed Vive lighthouse and custom optical tracking, and Vive, WMR, and now drone HMDs, on Linux, using a custom browser-as-compositor stack. (But there's also the Vrui VR Toolkit.)


> Forte VFX-1

Man, I remember seeing those in copies of PC Gamer as a kid. You can find modern reviews online. They had a lot of issues, mostly low resolution leading to motion sickness.

There were attempts at 3D web standards in the past. Who remembers VRML?!


VRML contained the technology to crash both Netscape Navigator and your entire X display, frequently in under a minute.


> You can find modern reviews online.

Which I don't think do them real justice; it's like people who grew up with modern cars reviewing a Model T. For a fair review you have to try to replicate everything around what is being reviewed, or at least have that mindset. A good review would use an old Pentium running Duke Nukem 3D with the patches meant for the VFX-1 (for the head tracking and "puck" usage).

> They had a lot of issues; mostly due to low resolution leading to motion sickness.

Most consumer HMDs of the time had low resolution, low field of view, contrast issues, etc. If they had any kind of tracking, then coupled with the PC hardware and software of the era, the lag between head movement and display update would also cause simulator-sickness issues.

One thing though - if you look at research papers from the era - there was discussion of what was termed a "looking past the pixels" effect.

That is, if you didn't focus on the pixels themselves, the screen-door effect (terrible at those low resolutions, especially if the FOV was made large - which in most cases it wasn't), or the small FOV, then psychologically (maybe neurologically too) some people would report seeing things at a better resolution and FOV than the hardware could deliver.

It wasn't investigated much, but the thinking was that for those participants, the mind would fill in the "gaps" using its mental model - so long as they weren't focusing on those gaps. Many other experiments have shown the mind does this in the "real world", so in theory, that effect could be real. I suspect it is, but only if the imagery being displayed in the HMD matches some imagery the mind has seen before. For instance, my memory of playing Duke Nukem 3D in a VFX-1 setup seems far better than I am sure it was - but then, I had played the game a lot on a regular monitor at a higher resolution, so I had a "model" of it in my mind. It might be a different experience today, since I haven't played it in a long while - or if I played another game in that HMD that I hadn't played before.

> There were attempts at 3D web standards in the past. Who remembers VRML?!

Mark Pesce was behind that - among other pioneers:

https://en.wikipedia.org/wiki/Mark_Pesce

These earlier attempts led to and informed the newer frameworks being played with today. VRML had its good and bad points; the main concept was to make VR as accessible to normal people as the Web was via HTML. But it really was a technology that was too early for the hardware most people had available (unless they had access to some high-end SGI machinery). That gave it a somewhat unfair reputation, and was part of the reason it never caught on.

Today we have a range of other options, some more "standard" than others, but most things "VR" still seem to be cobbled together from scratch, or at best layered on top of WebGL - and while things like Three.js have made WebGL easier for people to use, it still isn't anything like typing in simple VRML.
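For contrast, the whole VRML pitch was that a scene was a small declarative node tree. A hedged sketch of that idea in TypeScript - the VRML in the comment is from memory, and the `Node`/`describe` names are made up for illustration, standing in for the GL calls a real engine would issue:

```typescript
// A red box in VRML97 was roughly this much text (comment only, from memory):
//
//   #VRML V2.0 utf8
//   Shape {
//     appearance Appearance { material Material { diffuseColor 1 0 0 } }
//     geometry Box { size 2 2 2 }
//   }

// A declarative scene tree in the same spirit.
type Node =
  | { kind: "box"; size: [number, number, number]; color: string }
  | { kind: "group"; children: Node[] };

// A toy "renderer" that just walks the tree and describes it -
// a real WebGL or Three.js pipeline needs far more imperative setup.
function describe(node: Node): string {
  switch (node.kind) {
    case "box":
      return `box ${node.size.join("x")} (${node.color})`;
    case "group":
      return node.children.map(describe).join(", ");
  }
}

const scene: Node = {
  kind: "group",
  children: [{ kind: "box", size: [2, 2, 2], color: "red" }],
};
```

The point being: the scene is data you could type by hand, which is what made VRML feel accessible compared with today's imperative graphics stacks.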

Another note: one thing I liked about the VFX-1 that I haven't yet seen in current HMDs was the "flip up" design. It was meant so you could easily "enter and exit" a virtual world as you designed it. Today it may not be needed as much, as resolutions and FOV increase in HMDs - instead, we're slowly seeing the "recreate the monitor inside the HMD" kind of approach. But I still thought the concept was interesting, and I was kinda surprised that Oculus didn't do something similar, since Palmer Luckey hacked the VFX-1 to death on the MTBS3D forums prior to the Kickstarter.


I guess there's a limit on time you can edit your post...

Other comments I have read here indicate that this software is for "standalone" systems - and not, as I presumed, for "component" systems (I'm not sure what else to term them).

Which would make my concerns (mostly) irrelevant.




