
LeCun's technical assessments have been borne out over many years. The likely next step in scaling vision transformers is to treat the image as a MIP pyramid and have the transformer adaptively sample out of it. It would require RL to train (tricky), but it would decouple the compute footprint from the input size.
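For concreteness, a minimal sketch of the pyramid-tokenization idea (every name here is illustrative, not from any existing library; the top-k scorer stands in for the RL policy, and its non-differentiable selection is exactly the step that would need RL or a relaxation to train):

    # Speculative sketch: tokenize a MIP pyramid and keep only the
    # highest-scoring patches, so transformer compute stays fixed
    # regardless of input resolution. Top-k is a stand-in for a policy.
    import torch
    import torch.nn.functional as F

    def build_pyramid(img, levels=4):
        # img: (B, C, H, W); each level halves the resolution
        pyramid = [img]
        for _ in range(levels - 1):
            pyramid.append(F.avg_pool2d(pyramid[-1], kernel_size=2))
        return pyramid

    def patchify(feat, patch=16):
        # (B, C, H, W) -> (B, N, C*patch*patch) tokens
        t = F.unfold(feat, kernel_size=patch, stride=patch)  # (B, C*p*p, N)
        return t.transpose(1, 2)

    def select_tokens(pyramid, scorer, budget=256, patch=16):
        # Score every token at every level and keep the top `budget`.
        tokens = torch.cat([patchify(f, patch) for f in pyramid], dim=1)
        scores = scorer(tokens).squeeze(-1)       # (B, N_total)
        idx = scores.topk(budget, dim=1).indices  # non-differentiable!
        idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
        return torch.gather(tokens, 1, idx)

    img = torch.randn(1, 3, 512, 512)
    scorer = torch.nn.Linear(3 * 16 * 16, 1)      # toy patch scorer
    kept = select_tokens(build_pyramid(img), scorer)
    print(kept.shape)  # (1, 256, 768) regardless of input resolution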


As someone who has worked in computer vision ML for nearly a decade, I have to say this sounds like a terrible idea.

You don't remotely need RL for this use case. Image resolution pyramids are pretty normal, though; handling them well/efficiently is the big thing. Using RL for this would be like trying to use graphene to make a computer screen because it's new and flashy and everyone's talking about it. RL is inherently very sample-inefficient; it's there to approximate a learning signal when you don't have well-defined, informative training targets, which we have in spades in computer vision. Cross-entropy losses (and the like) are, generally, IME/IMO, what RL losses try to approximate, only at a much larger and more poorly defined scale.
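As a rough illustration of that last claim (a toy construction for this comment, not from any paper): on a classification task, REINFORCE with reward 1 for the correct class estimates a gradient parallel to the cross-entropy gradient, just with far more variance per sample.

    # Toy demo: REINFORCE with reward = 1[action == label] gives, in
    # expectation, a gradient parallel to the cross-entropy gradient,
    # but a single-sample estimate of it is very noisy.
    import torch

    torch.manual_seed(0)
    logits = torch.zeros(10, requires_grad=True)
    label = 3

    # Exact cross-entropy gradient.
    ce = torch.nn.functional.cross_entropy(logits.unsqueeze(0),
                                           torch.tensor([label]))
    ce_grad, = torch.autograd.grad(ce, logits)

    # Single-sample REINFORCE gradient estimates.
    estimates = []
    for _ in range(10000):
        probs = torch.softmax(logits, dim=0)
        action = torch.multinomial(probs, 1).item()
        reward = 1.0 if action == label else 0.0
        logp = torch.log_softmax(logits, dim=0)[action]
        grad, = torch.autograd.grad(-reward * logp, logits)
        estimates.append(grad)
    est = torch.stack(estimates)

    cos = torch.nn.functional.cosine_similarity(est.mean(0), ce_grad, dim=0)
    print(f"cosine(mean REINFORCE grad, CE grad) = {cos.item():.3f}")  # ~1.0
    print(f"summed per-sample variance = {est.var(0).sum().item():.4f}")  # >> 0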

Please mark speculation as such -- I've seen people read confident statements like this, decide they're plausible, and sink a lot of time/man-hours into them. It's not a bad idea from a creativity standpoint, but practically it is most certainly not the way to go about this.

(That being said, you can try dynamic sparsity approaches; they have some painful tradeoffs and generally don't scale, but no way in Illinois do you need RL for that.)


SPECULATION ALERT! I think there's reasonable motivation, though. Over the last few years there has been a steady drip of papers in this general area, at least insofar as they combine vision transformers with image pyramids, and work on applying RL to object detection goes back further still. IoU matching and the general way SSD and YOLO descendants are set up are kind of wacky, so I don't think it's much of a stretch to try to 1) avoid attending to, or even materializing, most of the pyramid, and 2) go directly to feature proposals without worrying about box anchors or grid cells or any of that. Now, with that context, if you still think it's a terrible idea, well, you're probably more current than I am.


Those aren't bad frustrations at all. That said: IoU is how the final box predictions are scored against ground truth; it doesn't change how you do feature aggregation, which will happen in basically any technique you use.
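For reference, IoU is just a box-overlap ratio computed on predicted vs. ground-truth boxes, so it sits downstream of whatever feature aggregation you use. A minimal version (my own sketch):

    # Intersection-over-union of two axis-aligned boxes, (x1, y1, x2, y2).
    # Used to score predicted boxes against ground truth at matching /
    # evaluation time; the backbone's feature aggregation never sees it.
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143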

Modern SSD/YOLO-style detectors use efficient feature pyramids; you need them to propose where things are in the image across scales.
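For context, the feature pyramids in question are FPN-style top-down pathways. A stripped-down sketch, with illustrative channel counts:

    # Minimal FPN-style feature pyramid: lateral 1x1 convs project the
    # backbone maps to a common width, then a top-down pathway upsamples
    # coarser levels and adds them into finer ones.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyFPN(nn.Module):
        def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
            super().__init__()
            self.lateral = nn.ModuleList(
                nn.Conv2d(c, out_channels, 1) for c in in_channels)
            self.smooth = nn.ModuleList(
                nn.Conv2d(out_channels, out_channels, 3, padding=1)
                for _ in in_channels)

        def forward(self, feats):
            # feats: finest to coarsest, e.g. strides 8, 16, 32
            laterals = [l(f) for l, f in zip(self.lateral, feats)]
            for i in range(len(laterals) - 2, -1, -1):  # top-down pass
                laterals[i] = laterals[i] + F.interpolate(
                    laterals[i + 1], size=laterals[i].shape[-2:],
                    mode="nearest")
            return [s(l) for s, l in zip(self.smooth, laterals)]

    feats = [torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32),
             torch.randn(1, 1024, 16, 16)]
    for p in TinyFPN()(feats):
        print(p.shape)  # all 256 channels, at strides 8/16/32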

This sounds a lot like going back to old-school object detection techniques, which end up being less efficient in general, and very compute-inefficient in particular.


There's been a huge amount of work on image transformers since the original ViT. A lot of it has explored different schemes for slicing the image into tokens, and I've definitely seen some of it use a multiresolution pyramid. Not sure about the RL part: after all, the coarser (lower-res) levels of the pyramid add far fewer tokens than the base (high-res) level, so it doesn't seem that necessary. But given the sheer volume of work out there, I'd bet someone has already explored this idea or something pretty close to it.
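To put numbers on that (my arithmetic, not the commenter's): each pyramid level has a quarter the tokens of the level below it, so the whole pyramid costs only about a third more tokens than the base level alone, since 1 + 1/4 + 1/16 + ... = 4/3. A quick check:

    # Token counts for a 4-level pyramid of a 1024x1024 image with 16x16
    # patches: the coarser levels add little on top of the base level.
    base = (1024 // 16) ** 2      # 4096 tokens at full resolution
    levels = [base // 4**i for i in range(4)]
    print(levels)                 # [4096, 1024, 256, 64]
    print(sum(levels) / base)     # ~1.33, close to the 4/3 limit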



