
The problem with the original Kinect (v1) is that good tracking software for it was never really written. Most applications that support it just use the original Microsoft SDK for the motion tracking, and it's just not very good: the main issue is that it always assumes the tracked person is directly facing the camera, and it handles occlusion very badly. The good thing about it is that it ran in real time on a potato.
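If someone wanted to build better tracking on top of the raw sensor data instead of the Microsoft skeleton tracker, grabbing depth frames from a v1 is only a couple of lines with the OpenKinect libfreenect Python bindings (a minimal sketch, assuming the freenect package is installed and a v1 device is plugged in):

    import freenect
    import numpy as np

    # Grab one 640x480 depth frame (11-bit values) from the first Kinect v1 on the bus.
    depth, _timestamp = freenect.sync_get_depth()
    depth = np.asarray(depth, dtype=np.uint16)
    print(depth.shape, depth.min(), depth.max())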

In order to get a good result, someone would need to train a good model for HPE that could use the point cloud data directly, but it seems nobody cares about depth sensors anymore; most effort is going into HPE from regular 2D video (like MediaPipe's holistic model). And given the results you can get with MediaPipe, OpenPose and the like, it's understandable that nobody bothers working with low-resolution point clouds for 3D HPE anymore.
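For comparison, the 2D-video route is only a few lines with MediaPipe's Python API (a sketch assuming the mediapipe and opencv-python packages; the pose model is used here, but the holistic model works the same way):

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose
    cap = cv2.VideoCapture(0)
    with mp_pose.Pose(model_complexity=1, min_detection_confidence=0.5) as pose:
        ok, frame = cap.read()
        if ok:
            # MediaPipe expects RGB input, OpenCV delivers BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_world_landmarks:
                # 33 landmarks with metric x/y/z, origin roughly at the hip midpoint.
                for lm in results.pose_world_landmarks.landmark:
                    print(lm.x, lm.y, lm.z, lm.visibility)
    cap.release()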

The only use case I can think of for a Kinect v1 in 2025 would be robotics, if you want a low-latency, low-resolution point cloud for your robot control, but even there I think we are moving to big vision models capable of making sense of regular video feeds.
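Turning a depth frame into that point cloud is just pinhole back-projection, which is why it's cheap enough to run at frame rate on anything (a sketch; the intrinsics below are commonly cited approximate values for the v1 depth camera, not calibrated ones):

    import numpy as np

    def depth_to_point_cloud(depth_mm: np.ndarray,
                             fx: float = 594.2, fy: float = 591.0,
                             cx: float = 339.5, cy: float = 242.7) -> np.ndarray:
        """Back-project a depth image (millimetres) to an Nx3 point cloud in metres."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0   # mm -> m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]                  # drop invalid (zero-depth) pixels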



It seems like there might be some work at https://k2vr.tech regarding this.



