There was something about the V2 system that V1 users found unsatisfactory. The depth output was jittery, or something along those lines.
V1 was based on Israeli structured-light tech that later became the basis for Apple's TrueDepth: a static dot pattern is projected onto the scene and captured by a camera, and deviations in the pattern are converted into depth. V2 was technically an early, alternative implementation of time-of-flight LIDAR: it modulated the emitted light, measured the phase delay of the returning signal, and output deviations from a known reference distance as depth images, or something along those lines.
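To make the phase-delay idea concrete, here is a minimal sketch of the underlying arithmetic. This is an illustration of the general continuous-wave ToF principle, not the Kinect v2's actual pipeline; the modulation frequency used below is an assumption chosen for the example.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Estimate distance from the phase shift of a modulated ToF signal.

    The light travels to the target and back, so the round trip covers
    2*d; a full 2*pi phase shift corresponds to one modulation
    wavelength of round-trip travel.
    """
    wavelength = C / mod_freq_hz  # modulation wavelength in metres
    return (phase_shift_rad / (2 * math.pi)) * wavelength / 2

# Example: 30 MHz modulation (hypothetical), half-cycle (pi) phase shift
d = tof_depth(math.pi, 30e6)  # roughly 2.5 m
```

Note that the phase wraps every 2*pi, so a single modulation frequency can only disambiguate distances up to half a wavelength; real ToF cameras typically combine multiple frequencies to extend the range.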
App support wouldn't have been lacking if the V2 had actually worked well. Something about it left users going "yeah... by the way" about it.
Speaking as someone whose boss loved the Kinect and used it experimentally in retail environments: the problem with the v2 wasn't the new tech. The new tech was, and is, amazing. The level of detail you can get from the v2 dwarfs the v1 on every axis.
The problem was that it ONLY had a Windows SDK, and most of the people who did amazing work with the Kinect v1 outside of games and the Xbox were using it on Macs and Linux in tools like openFrameworks and Processing. Cross-platform SDKs for the v1 were developed outside of Microsoft and PrimeSense, and tons of artists dove right in.
The Kinect v2 only offered a Windows SDK, and that's what killed it in the 'secondary market'.