Echoing my sibling commenters here, I’d love to hear more about this situation. Usually “inlining” and algorithmic complexity are treated as orthogonal kinds of optimization, and algorithmic optimization is typically the very argument for why the level of abstraction doesn’t matter much: you usually get a far bigger speedup from going from an O(n^2) to an O(n) algorithm than from micro-optimizations like Duff’s device.
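
To make that contrast concrete, here is a minimal sketch (my own illustration, not from the thread) of the kind of algorithmic win being described: duplicate detection dropping from O(n^2) pairwise comparison to O(n) with a hash set.

    # O(n^2): compare every pair of items.
    def has_duplicate_quadratic(items):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    # O(n): a single pass over a hash set.
    def has_duplicate_linear(items):
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False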



My apologies, forgot I posted.

On phone, so will be brief. Will try a longer response later. (Apologies to the other responders, not hitting them all.)

The basic point is that many designs abstract data out into different locations, with different ownership lifetimes. So, congrats, you kept the algorithm at a lower complexity, but failed to realize that you first have to essentially reindex all of the data for that complexity to even apply (see the sketch below).
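
A minimal sketch of what I mean, with hypothetical names (orders_by_region, users) invented for illustration: the “linear” join is only linear after you have paid to gather and reindex the scattered data.

    # Data owned by separate stores, keyed differently (hypothetical layout).
    orders_by_region = {
        "eu": [{"user_id": 1, "total": 30}, {"user_id": 2, "total": 5}],
        "us": [{"user_id": 1, "total": 12}],
    }
    users = [{"id": 1, "name": "Ana"}, {"id": 2, "name": "Bo"}]

    # Step 1, the hidden cost: reindex everything by user_id first.
    orders_by_user = {}
    for region_orders in orders_by_region.values():
        for order in region_orders:
            orders_by_user.setdefault(order["user_id"], []).append(order)

    # Step 2: only now does the O(n) per-user aggregation actually run.
    for user in users:
        total = sum(o["total"] for o in orders_by_user.get(user["id"], []))
        print(user["name"], total)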

Now, I grant that often the real problem is the data being scattered over any number of databases. If there are reasons to keep it spread out, though, it’s hard to just sweep that aside.

And then there are those who won’t accept “send it to a solver.” It’s really annoying how often people assume they can easily beat CPLEX and friends.
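
For what “send it to a solver” looks like in practice, here is a minimal sketch using the open-source PuLP modeler as a stand-in (CPLEX offers a similar modeling layer); the tiny assignment instance is made up for illustration.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

    workers, tasks = ["w1", "w2"], ["t1", "t2"]
    cost = {("w1", "t1"): 4, ("w1", "t2"): 2,
            ("w2", "t1"): 3, ("w2", "t2"): 7}

    prob = LpProblem("assignment", LpMinimize)
    assign = LpVariable.dicts("assign", cost.keys(), cat="Binary")

    prob += lpSum(cost[k] * assign[k] for k in cost)  # minimize total cost
    for w in workers:  # each worker takes exactly one task
        prob += lpSum(assign[(w, t)] for t in tasks) == 1
    for t in tasks:  # each task gets exactly one worker
        prob += lpSum(assign[(w, t)] for w in workers) == 1

    prob.solve()  # default CBC backend
    print(LpStatus[prob.status],
          [k for k in cost if assign[k].value() == 1])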



