I'm currently exploring how different APIs compute ETags [1]. I'm inclined to think the overhead of Merkle trees makes them less useful for ETags on paginated APIs, where responses are small and flat hashing is fast. One rule of thumb would be: the overhead isn't justified unless you need either (a) efficient updates on very large collections, or (b) collection-level state tracking across pagination boundaries. Sound about right?
[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
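To make that concrete, here's a rough Python sketch (function names are my own, not from any particular API): a flat ETag just hashes the serialized page, while a Merkle-style root hashes each item and combines the leaf hashes, which only pays off if the per-item hashes can be cached and reused as the collection changes.

    import hashlib
    import json

    def page_etag(items):
        # Flat hash: canonically serialize the page and hash it once.
        # Cheap when pages are small; recomputed from scratch every time.
        body = json.dumps(items, sort_keys=True, separators=(",", ":"))
        return '"%s"' % hashlib.sha256(body.encode()).hexdigest()

    def collection_root(items):
        # Merkle-style root over per-item hashes. Changing one item only
        # forces re-hashing O(log n) internal nodes -- but only if the
        # leaf hashes are cached somewhere, which is the extra machinery.
        level = [hashlib.sha256(json.dumps(i, sort_keys=True).encode()).digest()
                 for i in items]
        if not level:
            return '"%s"' % hashlib.sha256(b"").hexdigest()
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the last node on odd levels
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return '"%s"' % level[0].hex()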
Also, ETags can use weak validators:
> W/ (case-sensitive) indicates that a weak validator is used. Weak ETags are easy to generate, but are far less useful for comparisons. Strong validators are ideal for comparisons but can be very difficult to generate efficiently. Weak ETag values of two representations of the same resources might be semantically equivalent, but not byte-for-byte identical. This means weak ETags prevent caching when byte range requests are used, but strong ETags mean range requests can still be cached.
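The weak/strong distinction also changes how validators are compared. A minimal sketch of the two comparison functions described in RFC 9110 (weak comparison ignores the W/ prefix; strong comparison requires both tags to be strong and byte-for-byte identical):

    def _opaque(tag):
        # Strip the W/ prefix, keep the quoted opaque value.
        return tag[2:] if tag.startswith("W/") else tag

    def weak_match(a, b):
        # Weak comparison (e.g. If-None-Match): semantically equivalent
        # representations are allowed to match.
        return _opaque(a) == _opaque(b)

    def strong_match(a, b):
        # Strong comparison (e.g. If-Match, range revalidation): both
        # validators must be strong and identical.
        return not a.startswith("W/") and not b.startswith("W/") and a == b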