
My understanding is that a big problem is stationary objects, which mostly get filtered out. With more processing power you might be able to do something about that.
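To make the "filtered out" part concrete, here is a minimal sketch of the kind of Doppler clutter rejection automotive radar pipelines are commonly described as using: after compensating for the car's own speed, anything with near-zero ground-frame velocity (guardrails, signs, and also a stopped car ahead) gets discarded. The field names and threshold are hypothetical illustrations, not any vendor's actual implementation.

    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float      # distance to the detection, in metres
        doppler_mps: float  # rate of change of range, m/s (negative = approaching)

    def filter_static_clutter(returns, ego_speed_mps, threshold_mps=0.5):
        """Keep only detections that still appear to move after ego-motion
        compensation; everything else is treated as stationary clutter."""
        moving = []
        for r in returns:
            ground_speed = r.doppler_mps + ego_speed_mps  # ego-motion compensation
            if abs(ground_speed) > threshold_mps:
                moving.append(r)
        return moving

    # At 30 m/s, a stopped car ahead (doppler ~ -30) is dropped as clutter,
    # while a car ahead doing 25 m/s (doppler ~ -5) is kept.
    detections = [RadarReturn(80.0, -30.0), RadarReturn(60.0, -5.0)]
    print(filter_static_clutter(detections, ego_speed_mps=30.0))

The point of the sketch is that the stopped vehicle and the overpass look identical to this test, which is why distinguishing them takes more processing rather than a different threshold.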


My question is where the break-even point lies between adding more resolution / frames per second and performing more expensive analysis. Given that the human visual system is not especially high in resolution or speed, I would bet they're already at the point of diminishing returns on the sensor side, and that the big wins would come from more expensive processing techniques, which https://news.ycombinator.com/item?id=17671843 supports.



