As an engineer, I've always resented the unnecessary time I spend waiting for data in web browsers, so the poor state of browser caching has been on my mind for a long time.
There are a few ways to improve things:
1. Predictive modelling of user resource demand in the browser (e.g. preloading data). This is very easy to do nowadays with great accuracy.
2. Better cache control / eviction algorithms to keep the cache ultra hot.
3. (This.) Immutable caching is one of the major ways we could improve things. I'm not a fan of the parent article's way of doing it, though, because if widely implemented it would break the web in subtle ways, especially for small companies and people who don't have Facebook's resources and engineering talent. It doesn't take usability issues into account and therefore leaves too much room for user error (see the header sketch just after this list).
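For what it's worth on point 3, a header-level form of immutable caching already exists: the Cache-Control immutable extension, which at least some browsers ship. A minimal sketch of the response headers (the max-age value is illustrative):

    Cache-Control: public, max-age=31536000, immutable

The immutable flag tells the browser it never needs to revalidate the resource while it's fresh, even on a reload, which captures most of the practical win without changing URLs at all.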
I've written up a very simple 11-line spec here that addresses this issue: https://gist.github.com/ericbets/5c1569856c2ad050771ec0c866f...
I'll throw out a challenge to HN: if someone here knows Chromium internals well enough to expose the cache and asset loader to Electron, within a few months I'll release an open-source, ML-powered browser that speeds up web browsing by something between 10x and 100x. I feel like this should have been part of the web 10 years ago.
The size of your URLs will be an issue for pages with lots of assets. So much so that I bet if everything's warm in the cache except the page itself, then this proposal would be slower.
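To put a rough number on that, assume hex-encoded SHA-256 (64 characters per hash) and a page referencing 100 assets:

    100 references x ~64 extra chars ≈ 6.4 KB of additional HTML

And because the hashes are high-entropy, gzip recovers less of that overhead than it would for ordinary markup.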
I'd rather see stale-while-revalidate implemented in browsers.
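It's already specified for HTTP caches in RFC 5861; a response opts in with a single directive (numbers illustrative):

    Cache-Control: max-age=600, stale-while-revalidate=30

For up to 30 seconds after the response goes stale, a cache may serve the stale copy immediately while refetching in the background, so the user never blocks on revalidation.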
A lot of webapps already use this type of scheme for cache-busting, though of course the hash algorithm isn't mentioned. I assume that's important so that the browser can do verification?
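For comparison, the pattern most webapps use today puts the content hash in the file name purely for cache-busting; when the browser is meant to verify the bytes, the algorithm is named explicitly via Subresource Integrity. A sketch (the file name and digest are placeholders):

    <!-- hash in the name busts the cache; integrity lets the browser verify -->
    <script src="/static/app.3f2a9c1b.js"
            integrity="sha384-PLACEHOLDER_BASE64_DIGEST_OF_THE_FILE"></script>

So the algorithm does need to be stated somewhere for verification to work; that's what the sha384- prefix in the integrity attribute does.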