Hacker News

> I have a feeling people are going to reply without reading this through and assume the author poses a Deep Learning vs Classic CV sort of argument, in a deep-learning-is-overrated sort of way. Whereas it seems to me he is merely saying Deep Learning should be informed by Classic CV.

Thank you for saying this! You were indeed prescient - nearly all the comments so far are total non-sequiturs based on what the article actually shows.

Folks, please take a look at the actual GC-Net architecture, which is very nicely shown in the last figure in the post. As you'll see, it's still an end-to-end learnt deep learning model. The extremely neat trick is representing the cost volume piece (which connects the 2D and 3D parts of the CNN) as a differentiable function, so that it can sit inside an end-to-end learnt neural network.
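To make that concrete, here's a rough sketch (mine, not the authors' code) of what a GC-Net-style cost volume looks like: for each candidate disparity d, you concatenate the left feature map with the right feature map shifted by d pixels. Since this is just indexing and concatenation, gradients flow straight through it, which is what lets the volume sit between the 2D and 3D parts of the network. NumPy here for illustration; the real thing would be tensors in an autodiff framework.

```python
import numpy as np

def build_cost_volume(left_feat, right_feat, max_disp):
    """Sketch of a concatenation-based cost volume.

    left_feat, right_feat: (H, W, C) feature maps from the 2D CNN.
    Returns a (max_disp, H, W, 2C) volume: at each disparity d,
    left features are paired with right features shifted d pixels.
    """
    H, W, C = left_feat.shape
    volume = np.zeros((max_disp, H, W, 2 * C), dtype=left_feat.dtype)
    for d in range(max_disp):
        volume[d, :, :, :C] = left_feat
        # Shift the right features by d (zero padding at the left edge).
        volume[d, :, d:, C:] = right_feat[:, : W - d, :]
    return volume
```

Everything in there is differentiable (it's a gather plus a concat), so the 3D CNN that consumes the volume can backpropagate all the way into the 2D feature extractor.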

This approach - finding differentiable forms (often approximations) of domain-specific loss functions, transformations, etc., so they can be inserted into neural nets - is a very powerful and increasingly widely used technique. It's not about replacing domain knowledge with deep learning, and it's definitely not about replacing deep learning with domain knowledge - it's all about using both.
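The canonical example of such a differentiable approximation, and one GC-Net itself uses, is "soft argmin": picking the best disparity with a hard argmin over matching costs isn't differentiable, so you instead take a softmax over negated costs and compute the expected disparity. A minimal sketch:

```python
import numpy as np

def soft_argmin(costs):
    """Differentiable surrogate for argmin over candidate disparities.

    costs: (D,) matching costs for D candidate disparities.
    Softmax over negated costs gives a probability per disparity;
    the expected disparity is smooth in the costs, so gradients flow.
    """
    z = -costs
    e = np.exp(z - z.max())  # shift for numerical stability
    p = e / e.sum()
    return np.sum(np.arange(len(costs)) * p)
```

When one cost is much lower than the rest, the probability mass concentrates there and soft argmin closely matches the hard argmin; when costs are close, it blends neighbouring disparities, which even gives sub-pixel estimates for free.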

So in this case, the research is showing how to use geometry plus deep learning, not instead of deep learning.


