
We're less than 2 months away from the first real releases of VR equipment, and already the ecosystem is fractured into at least 4 different platforms/SDKs/styles: Oculus, SteamVR/OpenVR/Vive, Google Daydream, PlayStation VR.

I fear the fracturing will make total adoption lower and slower, as developers will have to choose sides or spend way more time developing for all platforms.



Yes, a single ecosystem would help adoption, but it would also significantly reduce innovation. It's so early in the lifecycle that we're better off emphasizing innovation.

Anybody buying now has to recognize that anything they're buying will be obsolete in a year or two. Once you realize that, you realize it doesn't matter too much if you pick the 'wrong' one. You'll be replacing your headset with v2 or v3 of the winner anyways.


"It's so early in the lifecycle that we're better off emphasizing innovation."

Early in the lifecycle, when there is a lot of R&D overhead and risk, is where companies actually have incentive to collaborate. It's very unlikely that the ecosystem will "open" later, instead we'll have to (as you said) accept the ecosystem of whoever wins.


And later in the lifecycle there are more vertical integrations that companies monetize, incentivizing collaboration to get as many users as possible.

It's all a part of the race to the bottom, from cloud data services to internet browsers.


It's really three levels:

High End - Desktop ( Vive / Oculus ) $600-800

Console ( PSVR ) $400-500

Mobile ( Google / Oculus / ?? ) $99-??

Console vs Desktop vs Mobile -- that's the real issue. The leap from Mobile VR to Desktop VR is huge. PSVR -- remains to be seen what Sony does on the hardware front (PS4.5 with new ATI card, or ??).


It's a good point, and I don't really see an issue.

After all, to this day gaming is distributed already between console/desktop/mobile, with each audience being large enough to ensure good profitability in every environment.


There's also high-end mobile (Samsung Gear). 300 bucks? 400? Don't know exactly.


Gear VR is only an additional $99. You have to have one of the select few phones it works with, but I don't think it's any more fair to count that in the cost of the setup than it is to count the PC in the cost of the desktop VR system, or the house in the cost of the roomscale VR system.


My bad! I went by the old Samsung Note-based price range. Had no idea they lowered the price that much.


They also gave it away to a lot of S7 buyers. So most people have it for $0.


It's the price of the shell, you still need the phone to put in it.


Consumer VR is in its infancy. Embryonic, almost. In the early days of desktop computing, how many different OSes and hardware configurations were there? In the years of the automobile before the Model T, how many wildly different types of car were available to the public?

One platform to rule them all will come in time, but we should be happy that for now the ecosystem is competitive; it means the consumer gets a voice in deciding what shape the tech will take.


Embryonic VR was the Virtual iGlasses tethered to a Pentium tower PC. Worked surprisingly well. Alas, the $600 price tag and similarly embryonic graphics tech was beyond the interest of most developers. Still have mine...


And several more contenders as well that don't get talked about as much.

One major problem is that none of them are open, and none of them have pushed to create standards; they all want to unilaterally define themselves as the standard. I think we'll get a more unified ecosystem when someone starts looking to collaborate on standards rather than exclusivity. Perhaps Google will do that with Daydream, or perhaps someone else will.


The VR Ecosystem looks a lot like the console ecosystem, which has dealt with the problem by adopting SDKs like Unity or Unreal.


True. Even then, I'm still holding my breath for Unity's VR API to support Google Cardboard. Anyone know when that's coming?



Nope. I mean this: http://docs.unity3d.com/Manual/VROverview.html

The whole point is to use Unity's single high-level, device-independent VR SDK (which enables many internal rendering optimizations), and to avoid plugging in yet another VR SDK alongside a bunch of separate GearVR, Oculus, Sony, etc. SDKs. Otherwise I'd have to write complex branching code (and build configuration scripts) in my app to support multiple APIs with my own ad-hoc layers of abstraction, which is costly to maintain and soon to become obsolete. That's Unity's job, and why I gladly paid them for a license.

Unity has a device-independent VR API that supports Oculus, GearVR, Sony and other devices, but as far as I know it does not yet support Google's Cardboard SDK. They've promised to support it, but they haven't delivered it yet, nor said when to expect it.

Unless Google wants to pay me for wasting my time, the last thing in the world I want to do is spend a lot of time and effort duct-taping Google's Cardboard SDK onto the side of my existing app that already uses Unity's built-in VR SDK, only for Unity to finally release the built-in Cardboard support they've been promising.

The potential and promise is there, but until I can just tick the "Virtual Reality Supported" checkbox in the Unity player settings to transparently support Google Cardboard, GearVR, Oculus, Sony, Vive, etc, the rubber hasn't hit the road.

Until then, you have to do short term dead end hacks like this, which will probably break every time anything it depends on releases a new version (which is often): https://github.com/ludo6577/VrMultiplatform
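The kind of per-SDK branching being avoided can be sketched schematically (plain Python with illustrative names only, not any real SDK's API): one ad-hoc branch per device versus a single device-independent entry point of the sort Unity is promising.

```python
def init_vr_branching(device):
    """Ad-hoc per-SDK branches: every new headset means another elif
    and another build configuration to maintain."""
    if device == "oculus":
        return "init via Oculus SDK"
    elif device == "gearvr":
        return "init via GearVR SDK"
    elif device == "cardboard":
        return "init via Cardboard SDK"
    raise ValueError("no code path for " + device)


class UnifiedVR:
    """Hypothetical single entry point: device differences live behind
    one registry, so app code is written once against this class."""
    _backends = {}

    @classmethod
    def register(cls, name, init_fn):
        cls._backends[name] = init_fn

    @classmethod
    def init(cls, device):
        return cls._backends[device]()


UnifiedVR.register("oculus", lambda: "init via Oculus SDK")
UnifiedVR.register("cardboard", lambda: "init via Cardboard SDK")
```

The second shape is why pushing the problem down into the engine matters: new devices register a backend once, instead of every app growing another branch.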


I'm waiting on that as well. But things are so early on, and some of the SDKs Unity has to integrate haven't even hit v1.0, so it's a bit hard to support.

Also, even if they do automate handling the rendering and input aspects of each VR SDK, you will still have to deal with the various platform/store level aspects: Google Play versus Oculus Home versus Steam versus whatever else comes along. It's already a struggle maintaining separate Rift/GearVR builds in Unity; when I have to maintain 4 different environments and their platforms? Yeah...


It already works on Gear VR (Samsung's phone-based VR thing), so we know it's feasible for it to run on phones that way.


What I mean is being able to check "Virtual Reality Supported" and have it support Google Cardboard just as well as it supports Oculus and GearVR, using Unity's built-in device-independent VR API.


Availability of head tracking, movement tracking around a space ("room scale VR"), and input mechanisms varies enormously between these. Room-scale VR on a Vive with SteamVR controllers is just not the same thing as Google Cardboard.


I agree, each brand of device is vastly different, especially when it comes to input. There's a common core that needs to be handled at a low level under the hood, but in an extensible manner. Unity has already announced their intention of tackling this terribly difficult problem, but now we're just waiting for them to deliver on their promises.

They're the only ones in a position to really solve the problem at the right level, since it interacts so deeply with their rendering and input systems, which are all under the hood, so external developers can't modify them, hook into them, or fix problems.

Unity needs to integrate the input system with their new UGUI user interface toolkit, and integrate it with VR devices so all the different kinds of input devices plug in and are handled consistently. So they need to get very friendly and cooperative with all the different VR and input device hardware manufacturers they support, and do whatever it takes, including flattering, shaming, or partying them into cooperating, if they foolishly decide they'd rather lock their developers into writing device-specific code so that it doesn't work with other manufacturers' devices on purpose. (Here's looking at you, Apple!)

Unity has announced a new input system [1] and published a prototype [2], and asked for developers to give them feedback [3], but it's way too early to use in a product.

I'm optimistic that they mean what they say, and glad they're publishing the prototype and asking for feedback from developers, but I appreciate it's a difficult problem that will take a while to get right.

I've read over the documentation, and it seems to take a nice modular approach that can abstract away many differences between input devices, but it's just a hard problem by its very nature, and there will always be special circumstances for particular input devices that you need to handle on a case-by-case basis. So both the VR API and the input API should support hooks and customizations and plug-ins for special purpose hardware, and ways of querying capabilities (like position tracking), enumerating devices, reflecting on and hooking into the model.
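The capability-querying and per-device-hook idea above can be made concrete with a tiny sketch (all names hypothetical, not Unity's actual API): devices advertise capabilities, and app code queries for them rather than branching on device names.

```python
class InputDevice:
    """Hypothetical device descriptor with queryable capabilities
    and a slot for per-device special-case hooks."""
    def __init__(self, name, capabilities, quirks=None):
        self.name = name
        self.capabilities = set(capabilities)
        self.quirks = quirks or {}  # device-specific hook functions

    def supports(self, cap):
        return cap in self.capabilities


class DeviceRegistry:
    """Enumerate devices and reflect on what they can do."""
    def __init__(self):
        self._devices = []

    def register(self, device):
        self._devices.append(device)

    def with_capability(self, cap):
        return [d for d in self._devices if d.supports(cap)]


registry = DeviceRegistry()
registry.register(InputDevice("Vive", {"position", "orientation", "hand_controllers"}))
registry.register(InputDevice("Cardboard", {"orientation"}))

# App code asks "can I do room scale here?" instead of hard-coding devices.
roomscale = registry.with_capability("position")
```

The point is that the special cases still exist, but they live behind a uniform query surface, so app code degrades gracefully on a 3DoF device like Cardboard.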

I'm disappointed that neither the new UGUI user interface system nor the new input system prototype seem to fully support multitouch input and gestures in a nice way, like the TouchScript library does [4], but I hope they eventually support that as well.

One frustrating example of a bad API that both the Oculus and Unity SDKs have is the "recenter" call [5], which resets some hidden state inside the tracking code so the current direction of your head is "forward". You can't read and write the current yaw nulling offset [6]; you can just "recenter", whatever that means. So there's no way to smoothly animate to recenter -- it always jerks your head around. They should expose that hidden state, and let me read and write it, so I can recenter smoothly. It's never a good idea to have a bunch of magic and hidden state behind an API like that. Plus the documentation is terrible -- they don't define what any of the terms mean, what the model is, or what the actual effect is mathematically. (Does it reset the yaw around the neck, resulting in a discontinuous jump in eye position? How can I tell what those measurements are, or know what the model really is? Does recenter work differently with devices that track the actual absolute position of your head, as opposed to estimating it from a model of the eye position relative to the neck?)
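A minimal sketch of what exposing that state would allow (plain Python, all names hypothetical): if the yaw nulling offset is a readable/writable number instead of hidden behind an opaque recenter call, you can interpolate it over several frames instead of snapping.

```python
import math

def shortest_angle(a, b):
    """Smallest signed difference from angle a to angle b, in degrees."""
    return (b - a + 180.0) % 360.0 - 180.0


class YawRecenter:
    """Hypothetical explicit yaw-nulling state. The opaque Recenter()
    becomes: set yaw_offset so that the current raw yaw maps to zero."""
    def __init__(self):
        self.yaw_offset = 0.0  # degrees added to the raw headset yaw

    def recenter_instant(self, raw_yaw):
        # What the existing API does: a discontinuous jump.
        self.yaw_offset = -raw_yaw

    def recenter_steps(self, raw_yaw, frames):
        """Ease the offset toward the recentered state over N frames,
        yielding each intermediate offset (one per rendered frame)."""
        start = self.yaw_offset
        delta = shortest_angle(start, -raw_yaw)
        for i in range(1, frames + 1):
            self.yaw_offset = start + delta * (i / frames)
            yield self.yaw_offset

    def apply(self, raw_yaw):
        """Nulled yaw actually presented to the app."""
        return (raw_yaw + self.yaw_offset) % 360.0
```

After the animation completes, the result is identical to the instant recenter, but the user's view turned smoothly instead of teleporting.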

The input system should also support other kinds of gesture recognition like motion tracking (staring, tilting, pecking, shaking and nodding your head), popular "quantified self" input devices like the ShakeWeight with its optional heart rate monitor [7], network and virtual device adaptation, emulating and debugging hardware devices in the editor, etc. But right now it's so early in the game that the new input system prototype isn't yet integrated with the new VR API or the (less) new UGUI toolkit.

Oculus has examples of how to integrate head pointed ray casting with UGUI using reticles and cursors, but that code is brittle and dependent on their particular SDK and app-specific utilities, and it copies and modifies a bunch of the Unity UGUI input tracking code instead of subclassing and hooking into it, because Unity didn't expose the public virtual methods and hooks that they needed. They've beseeched Unity to fix that by making their API more public and hookier, but for now, Oculus's example UGUI integration code is pretty hacky, complex, and not a long term solution.

All the input tracking stuff will work much better when Unity fixes it under the hood (and lets developers open up the hood and hot-rod the fuck out of it, or shop for competing solutions on the asset store), instead of having 6 different VR and multitouch SDKs that all solve it in 6 different but overlapping ways, which you can't integrate together.

[1] http://blogs.unity3d.com/2016/04/12/developing-the-new-input...

[2] https://sites.google.com/a/unity3d.com/unity-input-advisory-...

[3] http://forum.unity3d.com/threads/welcome-new-input-system-re...

[4] http://touchscript.github.io/

[5] http://docs.unity3d.com/ScriptReference/VR.InputTracking.Rec...

[6] see "nulling problem" -- this is a GREAT paper: http://www.billbuxton.com/lexical.html

[7] https://www.youtube.com/watch?v=JImGs2Xysxs


I highly recommend that anyone working with user interface input systems should read this classic paper by Bill Buxton:

Lexical and Pragmatic Considerations of Input Structures

http://www.billbuxton.com/lexical.html

PRAGMATICS & DEVICE INDEPENDENCE

"From the application programmer's perspective, this is a valuable feature. However, for the purposes of specifying systems from the user's point of view, these abstractions are of very limited benefit. As Baecker (1980b) has pointed out, the effectiveness of a particular user interface is often due to the use of a particular device, and that effectiveness will be lost if that device were replaced by some other of the same logical class. For example, we have a system (Fedorkow, Buxton & Smith, 1978) whose interface depends on the simultaneous manipulation of four joysticks. Now in spite of tablets and joysticks both being "locator" devices, it is clear that they are not interchangeable in this situation. We cannot simultaneously manipulate four tablets. Thus, for the full potential of device independence to be realized, such pragmatic considerations must be incorporated into our overall specification model so that appropriate equivalencies can be determined in a methodological way. (That is, in specifying a generic device, we must also include the required pragmatic attributes. But to do so, we must develop a taxonomy of such attributes, just as we have developed a taxonomy of virtual devices.)"

Also check out Proton, which is a brilliant regular-expression-based multi-touch gesture tracking system, which would work very nicely for VR and multi-device applications. Proton is to traditional ad-hoc gesture tracking as RELAX NG is to XML Schema.

http://vis.berkeley.edu/papers/proton/


Check out InstantVR [1] as one input integration solution. I'm experimenting with it for switching between Kinect, Leap, and Hydra for hand/arm support. A completely native solution would be much better, but this is a decent stop gap until we get that.

[1] https://www.assetstore.unity3d.com/en/#!/content/23009


That looks quite useful! Thanks for the recommendation.


Once you have tried the HTC Vive, you'll know that any other VR system is already obsolete, be it because of the experience or the hardware. I would venture to say that the Vive makes all other systems look ridiculous.

My bet is that in the future, VR will become more and more similar to what the Vive is offering now. Room-scale VR is a game changer, as much as VR itself.


But Oculus will have room scale soon too: Oculus Touch is coming soon. It'll add controllers that bring finger control to the mix, and a second camera to enable room scale.

I'm waiting for reviews and availability of the GTX 1070, as well as better headset availability before pulling the trigger on a headset. Hopefully Oculus has their controllers out by then, otherwise I probably won't wait for them. (I won't wait for AMD either; if they want me to consider Polaris, they better release it ASAP).


> But Oculus will have room scale soon too: Oculus Touch is coming soon.

They haven't even shipped all their headset preorders yet, I doubt "soon" is the word you want there.


Well.. yeah?

New technologies always result in lots of competing platforms, then consolidation, then stagnation, then a new competitor entering, new features, market chaos, then consolidation again.

And people will complain at every stage.

And it has always been thus.


Let us also not forget the Google-funded Magic Leap going head to head with Microsoft's HoloLens in the Augmented Reality space.


People have differing needs and differing devices, so fracturing under the heading "VR device" is perfectly normal. What developers need is a standardised API to access all of them, with specialized parts to cater for each device's unique functionality. Which will eventually happen. I cite cross-platform iOS/Android libraries and frameworks as an example.


I'm afraid I don't understand the nuances between when an ecosystem has healthy competition vs when it is fractured.

Certainly if different systems support different amounts of a standard (e.g. browsers and web standards), fractured seems like the right term.

Is the requirement for a fractured ecosystem just different APIs for the same thing? Are ML frameworks fractured? I suppose they would be.

What's the solution? Would you propose a VR standard? I imagine Cardboard was an attempt at that. But standards are slow moving -- not very effective at innovating.

I think a standard will naturally emerge once we know what should be standardized. But you don't want to kill innovation yet.


Or it's a bunch of ideas thrown out there... maybe one will stick and three will die. Sucks for early adopters with dead hardware, but if we in 2016 have the tech to make this work, our odds of hitting the money are pretty high right now.


That's what always happens. I'm glad those brave early adopters are out there, making difficult and expensive decisions and then regretting it so I don't have to.


Ignore the hype. Save your cash. Wait for version 2.0.

You'd think we would have learned this by now, but I'm still way too excited about the Vive.


The problem though is that the company able to push their product to win the market and the company with the best product probably won't be the same.


I think Android is a pretty good example of how "fracturing" is just FUD and not really that big of a deal.


Not even close to being the same thing. On Android there's a common codebase that's guaranteed to be there for all devices, even if you have to add a couple of additional layouts for the odd cases, which is what it mostly comes down to in this day and age. The same can't be said for VR right now.


It's not like there's a lot of freedom on how to express headset orientation. Every headset so far provides a quaternion you can poll. The word from developers like Northway Games with Fantastic Contraption is that even hand-motion controllers like the Vive and Oculus Touch are providing nearly identical interfaces.

No, you won't have literal one-to-one code, at least outside of WebVR. But I highly doubt porting is going to be that hard.

If anything, because you have to do the graphics on your own, there is an even greater chance of successfully building cross-platform systems. At the end of the day, it's all just GLSL.
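The "quaternion you can poll" point can be made concrete: turning a polled orientation into a gaze direction is the same few lines regardless of headset. A plain-Python sketch (no particular SDK assumed), rotating the canonical forward axis by a unit quaternion:

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z),
    using the standard v' = v + w*t + cross(q.xyz, t) identity
    with t = 2 * cross(q.xyz, v)."""
    w, x, y, z = q
    vx, vy, vz = v
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

# Gaze direction for a head turned 90 degrees about the vertical (+y) axis:
half = math.radians(90.0) / 2.0
yaw_90 = (math.cos(half), 0.0, math.sin(half), 0.0)
forward = quat_rotate(yaw_90, (0.0, 0.0, 1.0))  # ~(1, 0, 0)
```

Whatever struct each SDK hands back, this is the code that consumes it, which is why porting the orientation-handling layer between headsets tends to be mechanical.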


...people commenting on this thread are using Mac, Windows, Ubuntu, Debian, CentOS, Android, iOS, and a few others...

Does that make total adoption lower and slower?


Well, it is all about games. My bet is on PSVR for now.


At the beginning it will be all about games, but once the technology is good enough (Gen 3/4) it will transition to be the universal computer interface (replacing standard monitors).


AR will. VR's immersiveness is actually a problem for non-entertainment uses.


There are more types of entertainment than games. It's easy to imagine VR Concerts, VR Sightseeing, and VR Movies / Television. There are also many practical VR uses that the immersiveness will add to: 3d modeling, real estate/architectural tours, teleconferences, remote debugging, systems design / modeling.


Actually, I expected to hear more about VR after the releases. Could it be that the hype is over before it began?



