
True. Even then, I'm still holding my breath for Unity's VR API to support Google Cardboard. Anyone know when that's coming?



Nope. I mean this: http://docs.unity3d.com/Manual/VROverview.html

The whole point is to use Unity's single high level device independent VR SDK (which enables many internal rendering optimizations), and to avoid plugging in yet another VR SDK in addition to a bunch of other separate GearVR, Oculus, Sony, etc. SDKs, and having to write complex branching code (and build configuration scripts) in my app to support multiple APIs with my own ad-hoc layers of abstraction (which is costly to maintain and soon to become obsolete). That's Unity's job, and that's why I gladly paid them for a license.

Unity has a device independent VR API that supports Oculus, GearVR, Sony and other devices, but as far as I know it does not yet support Google's Cardboard SDK. They've promised to support it, but they haven't delivered it yet, nor said when to expect it.

Unless Google wants to pay me for wasting my time, the last thing in the world I want to do is spend a lot of time and effort duct taping Google's Cardboard SDK onto the side of my existing app (which already uses Unity's built-in VR SDK), only for Unity to finally release the built-in Google Cardboard support they've been promising.

The potential and promise is there, but until I can just tick the "Virtual Reality Supported" checkbox in the Unity player settings to transparently support Google Cardboard, GearVR, Oculus, Sony, Vive, etc, the rubber hasn't hit the road.

Until then, you have to do short term dead end hacks like this, which will probably break every time anything it depends on releases a new version (which is often): https://github.com/ludo6577/VrMultiplatform
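
For concreteness, here's roughly what the built-in device independent API already looks like from script for the devices it does support -- a minimal sketch, assuming the Unity 5.x UnityEngine.VR namespace (VRSettings, InputTracking):

    using UnityEngine;
    using UnityEngine.VR; // Unity 5.x built-in VR namespace

    // Minimal sketch: once "Virtual Reality Supported" is ticked, the same
    // script reads head tracking on any supported device, with no
    // Oculus/GearVR/Sony-specific SDK calls.
    public class HeadFollower : MonoBehaviour
    {
        void Update()
        {
            if (!VRSettings.enabled)
                return; // no VR device active

            transform.localPosition = InputTracking.GetLocalPosition(VRNode.Head);
            transform.localRotation = InputTracking.GetLocalRotation(VRNode.Head);
        }
    }

That's the level of abstraction I want Cardboard to slot into, rather than a separate SDK bolted on alongside it.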


I'm waiting on that as well. But things are still so early, and some of the SDKs Unity has to integrate haven't even hit v1.0, so it's a bit hard to support.

Also, even if they do automate handling the rendering and input aspects of each VR SDK, you will still have to deal with the various platform / store level aspects: Google Play versus Oculus Home versus Steam versus whatever else comes along. It's already a struggle maintaining separate Rift/GearVR builds in Unity; what happens when I have to maintain four different environments and their platforms? Yeah...
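
To illustrate the kind of branching and build-configuration burden I mean, here's a rough sketch using Unity's standard platform defines. OCULUS_STORE is a custom define you'd have to add yourself per build target, and the store-specific calls are left out because they come from each platform's own SDK:

    // Rough sketch of the per-platform branching described above, using
    // Unity's standard scripting define symbols. OCULUS_STORE is a custom
    // define added per build target; the store-specific entitlement/IAP
    // calls are omitted because they come from each platform's own SDK.
    public static class StoreGateway
    {
        public static void CheckEntitlement()
        {
    #if UNITY_ANDROID && OCULUS_STORE
            // GearVR build distributed through Oculus Home...
    #elif UNITY_ANDROID
            // plain Google Play build (e.g. Cardboard)...
    #elif UNITY_STANDALONE_WIN
            // Rift via Oculus Home, or Vive via Steam...
    #endif
        }
    }

Multiply that by every store-specific feature (entitlements, achievements, IAP) and the maintenance cost adds up fast.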


It already works on Gear VR (Samsung's phone-based VR thing), so we know it's feasible for it to run on phones that way.


What I mean is being able to check "Virtual Reality Supported" and have it support Google Cardboard just as well as it supports Oculus and GearVR, using Unity's built-in device independent VR API.


Availability of head tracking, movement tracking around a space ("room scale VR"), and input mechanisms varies enormously between these. Room scale VR on a Vive with Steam VR controllers is just not the same thing as Google Cardboard.


I agree, each brand of device is vastly different, especially when it comes to input. There's a common core that needs to be handled at a low level under the hood, but in an extensible manner. Unity has already announced their intention of tackling this terribly difficult problem, but now we're just waiting for them to deliver on their promises.

They're the only ones in a position to really solve the problem at the right level, since it interacts so deeply with their rendering and input systems, which are all under the hood, so external developers can't modify them, hook into them, or fix problems themselves.

Unity needs to integrate the input system with their new UGUI user interface toolkit and with VR devices, so that all the different kinds of input devices plug in and are handled consistently. That means getting very friendly and cooperative with all the different VR and input device hardware manufacturers they support, and doing whatever it takes -- flattering, shaming or partying them into cooperating -- if those manufacturers foolishly decide they'd rather lock their developers into writing code specifically for their device, so that it doesn't work with other manufacturers' devices on purpose. (Here's looking at you, Apple!)

Unity has announced a new input system [1] and published a prototype [2], and asked developers to give them feedback [3], but it's way too early to use in a product.

I'm optimistic that they mean what they say, and glad they're publishing the prototype and asking for feedback from developers, but I appreciate it's a difficult problem that will take a while to get right.

I've read over the documentation, and it seems to take a nice modular approach that can abstract away many differences between input devices. But it's just a hard problem by its very nature, and there will always be special circumstances for particular input devices that you need to handle on a case-by-case basis. So both the VR API and the input API should support hooks, customizations, and plug-ins for special purpose hardware, as well as ways of querying capabilities (like position tracking), enumerating devices, and reflecting on and hooking into the model.
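
Purely as a hypothetical illustration (none of these names exist in Unity's API), the kind of shape I mean is something like:

    // Hypothetical sketch only -- these interfaces are NOT part of Unity's API.
    // The point is the shape: query capabilities, enumerate devices, and leave
    // hooks so special-purpose hardware can plug itself in.
    public interface IVRDeviceInfo
    {
        string Name { get; }
        bool HasPositionalTracking { get; }   // Rift/Vive yes, Cardboard no
        bool HasHandControllers { get; }
    }

    public interface IVRDeviceRegistry
    {
        System.Collections.Generic.IEnumerable<IVRDeviceInfo> EnumerateDevices();
        void Register(IVRDeviceInfo device);  // plug-in hook for special hardware
    }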

I'm disappointed that neither the new UGUI user interface system nor the new input system prototype seem to fully support multitouch input and gestures in a nice way, like the TouchScript library does [4], but I hope they eventually support that as well.

One frustrating example of a bad API that both the Oculus and Unity SDKs have is the "recenter" call [5], which resets some hidden state inside the tracking code so the current direction of your head is "forward". You can't read and write the current yaw nulling offset [6]; you can just "recenter", whatever that means. So there's no way to smoothly animate to recenter -- it always jerks your head around. They should expose that hidden state, and let me read and write it, so I can recenter smoothly. It's never a good idea to have a bunch of magic and hidden state behind an API like that. Plus the documentation is terrible -- they don't define what any of the terms mean, what the model is, or what the actual effect is mathematically. (Does it reset the yaw around the neck, resulting in a discontinuous jump in eye position? How can I tell what those measurements are, or know what the model really is? Does recenter work differently with devices that track the actual absolute position of your head, as opposed to estimating it from a model of the eye position relative to the neck?)
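
One common workaround is to keep the yaw nulling offset yourself, on a parent rig transform around the camera, so you can read it, write it, and animate it. This is only a sketch of that idea, not an official Unity or Oculus API, and it only handles yaw:

    using UnityEngine;
    using UnityEngine.VR;

    // Sketch of a workaround: instead of calling InputTracking.Recenter(),
    // keep the yaw "nulling" offset yourself on a parent rig transform around
    // the camera, so it can be read, written, and animated smoothly.
    public class SmoothRecenter : MonoBehaviour // attach to the camera's parent rig
    {
        public float smoothing = 5f;   // higher = faster settle
        float targetRigYaw;            // the state Recenter() hides, made explicit

        public void RecenterSmoothly()
        {
            // Cancel out the current head yaw over time, so that where you're
            // looking right now gradually becomes "forward".
            float headYaw = InputTracking.GetLocalRotation(VRNode.Head).eulerAngles.y;
            targetRigYaw = -headYaw;
        }

        void Update()
        {
            float yaw = Mathf.LerpAngle(transform.localEulerAngles.y, targetRigYaw,
                                        Time.deltaTime * smoothing);
            transform.localEulerAngles = new Vector3(0f, yaw, 0f);
        }
    }

That's exactly the kind of thing that's easy once the state is exposed, and impossible when it's hidden behind a single opaque "recenter" call.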

The input system should also support other kinds of gesture recognition like motion tracking (staring, tilting, pecking, shaking and nodding your head), popular "quantified self" input devices like the ShakeWeight with its optional heart rate monitor [7], network and virtual device adaptation, emulating and debugging hardware devices in the editor, etc. But right now it's so early in the game that the new input system prototype isn't yet integrated with the new VR API or the (less) new UGUI toolkit.
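
To illustrate the kind of head-gesture recognition I mean (nodding, shaking, and so on), here's a rough sketch built on the existing tracking API; the thresholds are invented for illustration:

    using UnityEngine;
    using UnityEngine.VR;

    // Rough sketch: detect a "nod" as a quick down-then-up swing in head
    // pitch within a short time window. Thresholds are illustrative only.
    public class NodDetector : MonoBehaviour
    {
        public float pitchThreshold = 12f;   // degrees of downward pitch to count
        public float windowSeconds = 0.6f;
        float downTime = -1f;

        void Update()
        {
            // Signed pitch: positive = looking down (wraparound handled crudely).
            float pitch = InputTracking.GetLocalRotation(VRNode.Head).eulerAngles.x;
            if (pitch > 180f) pitch -= 360f;

            if (pitch > pitchThreshold)
                downTime = Time.time;                   // head went down
            else if (downTime > 0f && Time.time - downTime < windowSeconds && pitch < 2f)
            {
                Debug.Log("Nod detected");              // came back up quickly
                downTime = -1f;
            }
        }
    }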

Oculus has examples of how to integrate head pointed ray casting with UGUI using reticles and cursors, but that code is brittle and dependent on their particular SDK and app-specific utilities, and it copies and modifies a bunch of the Unity UGUI input tracking code instead of subclassing and hooking into it, because Unity didn't expose the public virtual methods and hooks that they needed. They've beseeched Unity to fix that by making their API more public and hookier, but for now, Oculus's example UGUI integration code is pretty hacky, complex, and not a long term solution.
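
The basic ray-and-reticle part is simple enough to sketch without any Oculus code; it's routing the hit into UGUI's event system that requires the hacks described above, so that part is deliberately left out here:

    using UnityEngine;

    // Sketch of the basic gaze-pointer idea only (not Oculus's sample code):
    // cast a ray from the head/camera forward and park a reticle at the hit.
    public class GazeReticle : MonoBehaviour
    {
        public Camera head;            // the VR camera
        public Transform reticle;      // small quad/sprite placed at the hit point
        public float maxDistance = 10f;

        void Update()
        {
            Ray gaze = new Ray(head.transform.position, head.transform.forward);
            RaycastHit hit;
            if (Physics.Raycast(gaze, out hit, maxDistance))
                reticle.position = hit.point - gaze.direction * 0.01f; // avoid z-fighting
            else
                reticle.position = gaze.origin + gaze.direction * maxDistance;
        }
    }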

All the input tracking stuff will work much better when Unity fixes it under the hood (and lets developers open up the hood and hot-rod the fuck out of it, or shop for competing solutions on the asset store), instead of having 6 different VR and multitouch SDKs that all solve it in 6 different but overlapping ways, which you can't integrate together.

[1] http://blogs.unity3d.com/2016/04/12/developing-the-new-input...

[2] https://sites.google.com/a/unity3d.com/unity-input-advisory-...

[3] http://forum.unity3d.com/threads/welcome-new-input-system-re...

[4] http://touchscript.github.io/

[5] http://docs.unity3d.com/ScriptReference/VR.InputTracking.Rec...

[6] see "nulling problem" -- this is a GREAT paper: http://www.billbuxton.com/lexical.html

[7] https://www.youtube.com/watch?v=JImGs2Xysxs


I highly recommend that anyone working with user interface input systems should read this classic paper by Bill Buxton:

Lexical and Pragmatic Considerations of Input Structures

http://www.billbuxton.com/lexical.html

PRAGMATICS & DEVICE INDEPENDENCE

"From the application programmer's perspective, this is a valuable feature. However, for the purposes of specifying systems from the user's point of view, these abstractions are of very limited benefit. As Baecker (1980b) has pointed out, the effectiveness of a particular user interface is often due to the use of a particular device, and that effectiveness will be lost if that device were replaced by some other of the same logical class. For example, we have a system (Fedorkow, Buxton & Smith, 1978) whose interface depends on the simultaneous manipulation of four joysticks. Now in spite of tablets and joysticks both being "locator" devices, it is clear that they are not interchangeable in this situation. We cannot simultaneously manipulate four tablets. Thus, for the full potential of device independence to be realized, such pragmatic considerations must be incorporated into our overall specification model so that appropriate equivalencies can be determined in a methodological way. (That is, in specifying a generic device, we must also include the required pragmatic attributes. But to do so, we must develop a taxonomy of such attributes, just as we have developed a taxonomy of virtual devices.)"

Also check out Proton, a brilliant regular expression based multi-touch gesture tracking system that would work very nicely for VR and multi-device applications. Proton is to traditional ad-hoc gesture tracking as RELAX NG is to XML Schema.

http://vis.berkeley.edu/papers/proton/
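
To give a toy flavor of the idea (this is nothing like Proton's actual implementation): encode each touch event as a symbol, and describe a gesture as a regular expression over the event stream:

    using System.Text.RegularExpressions;

    // Toy illustration of the flavor of Proton's idea, not its implementation:
    // each touch event becomes a symbol (D = down, M = move, U = up, digit =
    // finger id), and a gesture is a regular expression over the event stream.
    public static class GestureRegexDemo
    {
        // One-finger tap: down, at most a couple of moves, then up.
        static readonly Regex Tap = new Regex("^D1(M1){0,2}U1$");

        // Two-finger pinch-ish stream: both fingers down, interleaved moves, both up.
        static readonly Regex Pinch = new Regex("^D1D2(M1|M2)+U1U2$");

        public static void Main()
        {
            string stream = "D1" + "M1" + "U1";            // events appended as they arrive
            System.Console.WriteLine(Tap.IsMatch(stream));    // True
            System.Console.WriteLine(Pinch.IsMatch(stream));  // False
        }
    }

The appeal is that conflicting gestures can be detected statically by analyzing the expressions, instead of debugging tangles of ad-hoc state machines.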


Check out InstantVR [1] as one input integration solution. I'm experimenting with it for switching between Kinect, Leap, and Hydra for hand/arm support. A completely native solution would be much better, but this is a decent stopgap until we get that.

[1] https://www.assetstore.unity3d.com/en/#!/content/23009


That looks quite useful! Thanks for the recommendation.



