Even old operating systems like Windows NT4 could easily memory-map a window of up to 256 MB into a much larger file, and could map multiple such windows concurrently. It could do this even when far less physical memory was available, because the mapping is virtual, not physical.
Well-written code would start with a sliding window of some reasonable size such as 64 MB, and if that failed would try halving it repeatedly down to some lower threshold.
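The sliding-window fallback is straightforward to sketch. Below is a minimal illustration using the Win32 `MapViewOfFile` API; the helper name `map_window`, the 64 MB starting size, and the 1 MB floor are illustrative choices of mine, not taken from any particular library.

```c
#include <windows.h>
#include <stddef.h>

/* Sketch of a sliding-window mapper: try to map a window of `desired`
 * bytes at `offset`, halving the window on failure until a minimum
 * threshold is reached. Returns the view pointer and writes the size
 * actually mapped, or NULL if even the minimum window could not be mapped.
 *
 * Assumptions (illustrative, not from the original text):
 *   - `offset` is already aligned to the system allocation granularity
 *     (typically 64 KB), as MapViewOfFile requires.
 *   - `mapping` came from CreateFileMapping on an already-open file,
 *     and offset + window does not run past the end of the mapping. */
static void *map_window(HANDLE mapping, unsigned long long offset,
                        size_t desired, size_t *mapped)
{
    const size_t min_window = (size_t)1 << 20;  /* give up below 1 MB   */
    size_t window = desired;                    /* e.g. start at 64 MB  */

    while (window >= min_window) {
        void *view = MapViewOfFile(mapping, FILE_MAP_READ,
                                   (DWORD)(offset >> 32),
                                   (DWORD)(offset & 0xFFFFFFFFu),
                                   window);
        if (view != NULL) {
            *mapped = window;
            return view;
        }
        window /= 2;                            /* halve and retry      */
    }
    *mapped = 0;
    return NULL;  /* caller can fall back to plain ReadFile */
}
```

A caller would start with something like `map_window(mapping, offset, 64 * 1024 * 1024, &got)` and, once it needs data beyond the current window, release it with `UnmapViewOfFile` and map the next window.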
Unfortunately, the 64-bit era has led to a "pit of failure" where many programmers incorrectly assume that a 64-bit address space means 2⁶⁴ bytes can be mapped reliably in a single call. This is never true, because of all sorts of operating system and hardware limitations.
I've seen "modern" code written with this assumption, such as a Rust library and a couple of C# libraries. They fail on older Xeons, some hypervisors, and 32-bit platforms.
Even in 2021, server applications still run as 32-bit processes surprisingly often. For example, Azure App Service defaults to 32-bit on its low-end tiers to save memory.