Only if blocks have the same offset in the same binary and if they align with the block boundaries. Otherwise, different hashes would be generated. I don't expect that to happen a lot.
There are ways around this. See "content-aware chunking", e.g. implemented using rolling hashes [1]. This is for example what rsync does.
The idea is to make blocks (slightly) variable in size. Block boundaries are determined based on a limited window of preceding bytes. This way a change in one location will only have a limited impact on the following blocks.
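To make that concrete, here is a minimal Python sketch of content-defined chunking with a Rabin-Karp style rolling hash. The window size, mask, and hash constants are illustrative choices, not taken from rsync or any particular implementation.

```python
# A minimal sketch of content-defined chunking with a Rabin-Karp style
# rolling hash. WINDOW, MASK, BASE, and MOD are illustrative choices,
# not taken from rsync or any particular tool.

WINDOW = 48            # how many preceding bytes determine a boundary
MASK = (1 << 20) - 1   # low 20 hash bits zero -> ~1 MiB average chunks
BASE, MOD = 257, (1 << 61) - 1

def chunk_boundaries(data: bytes) -> list[int]:
    """Return end offsets of chunks. A boundary is declared wherever the
    rolling hash of the preceding WINDOW bytes has its low 20 bits zero,
    so a local edit only disturbs the boundaries near that edit."""
    boundaries = []
    h = 0
    pow_out = pow(BASE, WINDOW - 1, MOD)  # weight of the byte leaving the window
    for i, b in enumerate(data):
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * pow_out) % MOD  # slide the window forward
        h = (h * BASE + b) % MOD
        if i + 1 >= WINDOW and (h & MASK) == 0:
            boundaries.append(i + 1)
    if not boundaries or boundaries[-1] != len(data):
        boundaries.append(len(data))  # the last chunk ends at end of data
    return boundaries
```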
Rolling hashing is really only useful for finding nonaligned duplicates.
There isn't a way to advertise some "rolling hash value" in a way that allows other people with a differently-aligned copy to notice that you and they have some duplicated byte ranges.
Rolling hashes only work when one person (or two people engaged in a conversation, like rsync) already has both copies.
I think you misunderstood how the rolling hash is used in this context. It's not used to address a chunk; you'd use a plain old cryptographic hash function for that.
The rolling hash is used to find the chunk boundary: hash a window before every byte (which is cheap with a rolling hash) and compare it against a defined bit mask. For example: check whether the first 20 bits of the hash are zero. If so, you'd get chunks with an average length of about 2^20 bytes (1 MiB).
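As a usage example building on the chunk_boundaries() sketch further up the thread: the rolling hash only picks the cut points, and each resulting chunk is then addressed by an ordinary cryptographic hash. SHA-256 here is just an example choice.

```python
import hashlib

def chunk_hashes(data: bytes) -> list[str]:
    """Split data at content-defined boundaries (see chunk_boundaries()
    in the sketch above) and address each chunk by its SHA-256 digest."""
    hashes, start = [], 0
    for end in chunk_boundaries(data):
        hashes.append(hashlib.sha256(data[start:end]).hexdigest())
        start = end
    return hashes
```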
If I discover that the file I want to publish shares a range with an existing file, that does very little because the existing file has already chosen its chunk boundaries and I can’t influence those. That ship has sailed.
I can only benefit if the a priori chunks are small enough that some subset of the identified match is still addressable. And then I may only get half or two thirds of the improvement I was after.
> that does very little because the existing file has already chosen its chunk boundaries
If they both used the same rolling hash function on the same or similar data, then regardless of the initial and final boundaries and regardless of when they chose them, they will share many chunks with high probability. That's just how splitting with rolling hashes works: the chunks are variable-length, and their boundaries depend only on the local content, not on any global alignment.
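A quick way to convince yourself, using the chunk_hashes() sketch from earlier in the thread (sizes and counts here are illustrative): insert a few bytes near the start of a buffer and compare the two chunk lists. Only the chunk containing the edit should change.

```python
import os

original = os.urandom(8 * 2**20)   # 8 MiB of random test data
edited = original[:1000] + b"a few inserted bytes" + original[1000:]

a, b = set(chunk_hashes(original)), set(chunk_hashes(edited))
print(f"{len(a & b)} of {len(a)} chunks shared")  # expect all but ~1 shared
```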
The idea is that on non-random data, you are able to use a heuristic that creates variable-sized chunks that fit the data. The simplest way seems to be detecting padding zeros and starting a new block at the first non-zero byte that follows. There are probably other ways; knowing the data type should help.
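For what it's worth, a rough sketch of that padding heuristic; the minimum run length is an arbitrary threshold, not taken from any existing format or tool.

```python
def padding_cut_points(data: bytes, min_run: int = 512) -> list[int]:
    """Return offsets where a new block should start: the first non-zero
    byte after a run of at least min_run zero bytes. min_run is an
    arbitrary illustrative threshold."""
    cuts, run = [], 0
    for i, byte in enumerate(data):
        if byte == 0:
            run += 1
            continue
        if run >= min_run:
            cuts.append(i)  # the padding run ended; a new block begins here
        run = 0
    return cuts
```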
That seems fairly unlikely. Not a lot of big files have zero padding, and if they did, you'd compress them. That will reduce your transfers more than any range substitution ever will.
> Identical files will always have the same hash and can more easily be moved from one torrent to another (when creating torrents) without having to re-hash anything.
Well, if files in an ISO are aligned to some boundary, it would help a lot, similar to how filesystems on disk have a sector size where all files begin at the start of a sector. However, I don't know if this is true of ISO9660 or any of its extensions.