Hacker News

Nice! I'd be interested in the method they use to put everything together. My best bet is some basic structure-from-motion weighted by the depth sensor... or maybe it's simpler than that...


Author here. It uses color info as well as depth for tracking. Otherwise, it'd fail if you pointed the camera at featureless geometry, e.g. walls, floors.
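For other readers: tracking on both channels usually means minimizing a joint photometric (color) plus geometric (depth) error per pixel. A minimal sketch of that idea, assuming pre-aligned pixel values and a made-up weighting; the function name and weights are illustrative, not necessarily what Forge does:

```python
def rgbd_residual(i_ref, d_ref, i_cur, d_cur, w_color=0.3):
    """Combined photometric + geometric error for one pixel pair.

    A real tracker warps pixels through the estimated camera pose and
    sums this over the whole image; pre-warped values are assumed here
    to keep the sketch short.
    """
    if d_ref <= 0 or d_cur <= 0:      # missing/invalid depth reading
        return 0.0
    r_color = i_cur - i_ref           # photometric term (intensity)
    r_depth = d_cur - d_ref           # geometric term (depth)
    return w_color * r_color**2 + (1 - w_color) * r_depth**2
```

The point of the mix: on a textured but flat surface the depth term is nearly degenerate and color constrains the pose, while on a textureless wall the roles reverse, so neither channel alone is enough.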


Looks great :)

In the video it looks like you are using a volumetric representation, perhaps an octree with isosurface extraction?


Correct. It stores a volumetric signed distance function in a tree structure, which makes mesh generation (and raytracing) simple and fast.
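A toy illustration of why that representation is convenient, with a plain dict standing in for the tree (the class and method names are mine, not Forge's): only voxels near observed surfaces are stored, distances are fused with a weighted running average as in standard TSDF fusion, and the surface is wherever the stored distance changes sign, which is exactly what both isosurface extraction and sphere tracing look for.

```python
from collections import defaultdict

class SparseSDF:
    """Signed distances stored only near observed surfaces, keyed by
    integer voxel coordinate. A dict stands in for the tree structure;
    the idea is the same: empty space costs nothing."""

    def __init__(self):
        self.dist = {}                   # (ix, iy, iz) -> signed distance
        self.weight = defaultdict(float)

    def integrate(self, key, d, w=1.0):
        # Weighted running average of distance observations (TSDF fusion).
        old_w = self.weight[key]
        old_d = self.dist.get(key, 0.0)
        self.weight[key] = old_w + w
        self.dist[key] = (old_d * old_w + d * w) / (old_w + w)

    def surface_crossings(self):
        # The surface lies between adjacent voxels whose distances have
        # opposite signs -- the zero crossings that mesh extraction
        # (marching cubes) or raytracing (sphere tracing) locate.
        for (x, y, z), d in self.dist.items():
            for nb in ((x + 1, y, z), (x, y + 1, z), (x, y, z + 1)):
                if nb in self.dist and d * self.dist[nb] < 0:
                    yield ((x, y, z), nb)
```

Usage: integrate depth observations frame by frame, then walk the zero crossings to triangulate a mesh, with no work spent on the (vast) unobserved regions.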


Can it stitch together much larger captures? E.g. a building exterior and interior?


You can download and manually align separate captures to create a larger mesh, but no, Forge doesn't do this automatically yet.

On the roadmap, but won't be in the first version.


Looks neat - what depth sensor were you using to generate that video? It's not totally clear to me from the comments.



