
How would this compare against producing a 3D mesh using traditional photogrammetry and comparing the CAD model and mesh for deviations? Or would this be unrealistic since the photogrammetrically produced mesh would lack the level of detail required?



That's a great idea - especially because photogrammetry with high-quality cameras can capture far more detail than most common 3D sensors (RealSense, Luxonis, etc.). The big problems are the computation cost and/or setup complexity of photogrammetry: you either need to do a lot of computation (a couple of minutes on my RTX 4090 last time I did it for a medium-sized object) to estimate keypoints and disparities, or you need a really well-calibrated ring of cameras plus some way to feed parts through it at line rate, though that setup could get away with much less compute.

A laser scanner would probably make the mesh-comparison approach easier, but it's still incredibly hard to get a really accurate, high-resolution depth map in a short time span - especially if the parts are actively moving.
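Once you do have a scan, the deviation check itself is the cheap part. Here's a minimal sketch of that step, assuming Python with trimesh and scipy, placeholder file names, and a scan that's already registered to the CAD frame (in practice you'd run ICP first); nearest-neighbor distance to dense samples of the CAD surface stands in as a cheap approximation of true point-to-surface deviation:

    # Hypothetical sketch: flag deviations between a scanned point cloud and a
    # reference CAD mesh. Assumes the scan is already aligned to the CAD frame
    # and that the CAD file loads as a single mesh. File names and the
    # tolerance value are placeholders.
    import numpy as np
    import trimesh
    from scipy.spatial import cKDTree

    cad = trimesh.load("part_cad.stl")    # reference CAD geometry
    scan = np.loadtxt("part_scan.xyz")    # N x 3 scanned points (photogrammetry / laser)

    # Densely sample the CAD surface so nearest-neighbor distance approximates
    # point-to-surface deviation.
    ref_points = cad.sample(200_000)

    tree = cKDTree(ref_points)
    dist, _ = tree.query(scan)            # deviation of each scanned point

    tolerance = 0.2                       # assumed spec, in the model's units
    print(f"mean deviation  : {dist.mean():.3f}")
    print(f"95th percentile : {np.percentile(dist, 95):.3f}")
    print(f"out of tolerance: {(dist > tolerance).mean() * 100:.1f}% of points")

The hard part is everything upstream of this: getting scan points dense and accurate enough that a tolerance like that is actually meaningful.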



