
If you have to refetch on a cache miss you're going to be doing both. But all optimizations are always playing with the triangle of CPU time, memory, and IO (with the hidden fourth dimension of legibility), so I don't think you're saying anything that can't be taken as given - even among people who tend to pick incorrectly, or who just lose track of when the situation has changed.
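
To make the "both" concrete: the usual get-or-load pattern pays IO, CPU, and memory together on every miss. A minimal sketch in Java - the backend fetch is hypothetical, just a stand-in for a real network or disk read:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class GetOrLoad {
        private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

        // A miss pays IO (the fetch), CPU (whatever decoding happens in
        // the loader), and memory (storing the result) all at once.
        byte[] get(String key) {
            return cache.computeIfAbsent(key, this::fetchFromBackend);
        }

        // Hypothetical loader; stands in for the actual refetch.
        private byte[] fetchFromBackend(String key) {
            return ("value-for-" + key).getBytes();
        }
    }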


I understood the OP to be saying something along the lines of: if we have a fixed cost per object, then we should bias towards smaller objects if we want to minimize that cost.
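
If I'm reading that right, the back-of-the-envelope version (with made-up numbers) goes: with a 1 GB cache and a fixed cost per miss regardless of object size, 1 KB objects fit roughly a million entries while 10 KB objects fit roughly a hundred thousand, so at comparable per-object hit rates the smaller objects avert about ten times as many misses for the same memory.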

And agreed on legibility and/or simplicity. I'll take something I can reason about and maintain over something more complicated that exists just to eke out a slightly better hit ratio. That said, if you're caching at scale, a 0.05% difference in hit ratio can be a big deal.
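
For instance (illustrative numbers): at 1,000,000 requests per second, a 0.05% swing in hit ratio is 500 extra requests per second landing on the backend - tiny in relative terms, but potentially a lot of extra capacity at that volume.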

As a matter of personal taste/opinion I also shy away from closed-loop systems; feedback makes things complicated in non-intuitive ways. Caffeine seems neat in its use of feedback to adapt the cache to the workload - as always, test with your workloads and pick what's best for your situation.
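
For the curious, a minimal Caffeine sketch (Java); once you bound the size, its W-TinyLFU policy does the workload adaptation mentioned above. The key/value names and numbers here are just illustrative:

    import com.github.benmanes.caffeine.cache.Cache;
    import com.github.benmanes.caffeine.cache.Caffeine;
    import java.time.Duration;

    public class CaffeineSketch {
        public static void main(String[] args) {
            // Size-bounded cache; admission/eviction is driven by
            // Caffeine's frequency-based W-TinyLFU policy.
            Cache<String, String> cache = Caffeine.newBuilder()
                .maximumSize(10_000)
                .expireAfterWrite(Duration.ofMinutes(5))
                .recordStats() // collect hit/miss counts
                .build();

            cache.put("k", "v");
            System.out.println(cache.getIfPresent("k")); // "v"
            System.out.println(cache.stats()); // CacheStats with hit ratio
        }
    }

recordStats() is how you'd actually check the hit ratio against your own workload before committing to it.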


The thing is that untangling the logic often reveals either new feature opportunities that would have been ugly to implement before, or new optimization opportunities because the code is clearer - and sometimes two things that seemed unrelated are now obviously related.

If I can't figure out how to make code faster without cheesing it, I'll just follow the campsite rule and hope for inspiration. (The more you work with code, the more you understand it, and I might as well clean up while I'm here.)



