Larger address spaces are useful, even if you don't have 2^64 bytes of physical RAM. For example, most 32-bit Firefox crashes are OOM crashes due to virtual address space exhaustion.
Even if we end up with more than 2^64 bytes of RAM (that's 16 EB, not really an astronomical amount), it's not obvious that the optimal way to use it will be to place everything in a single address space.
Yeah, I'd fully expect something like PAE to be more useful and common. Even if something is using more than 16 EB of RAM, I can't imagine it'll be a single process; more likely it'll be a larger distributed system, leaving each individual process smaller than that. Just filling up 16 EB of RAM right now would take an absurdly long time.
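Here's a back-of-envelope sketch (the 25 GB/s sustained-bandwidth figure is my assumption, not something from the thread):

```python
# Rough estimate: time to write all 16 EB (2^64 bytes) of RAM once,
# assuming a single machine sustaining 25 GB/s of memory bandwidth
# (an assumed DDR4-era figure; scale to taste).
ADDRESS_SPACE = 2**64                  # bytes
BANDWIDTH = 25 * 10**9                 # bytes/second (assumed)
seconds = ADDRESS_SPACE / BANDWIDTH
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.0f} years")            # ~23 years
```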
But it takes longer each time. With exponential growth (e.g. Moore's Law), you use up bits linearly: if capacity doubles every two years, you use one bit every two years. That gives us about 64 years (32 extra bits at two years each) from when 32-bit became inadequate to when 64-bit becomes inadequate.
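Spelled out, under the two-year doubling assumption above:

```python
# Exponential growth consumes address bits linearly: each doubling
# uses up one more bit, so going from exhausting 32 bits to
# exhausting 64 bits takes 32 doublings.
DOUBLING_PERIOD_YEARS = 2                   # Moore's-Law-style cadence
extra_bits = 64 - 32
print(extra_bits * DOUBLING_PERIOD_YEARS)   # 64 years
```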
And with Moore's Law running out of steam (isn't physics a bitch!), this might never happen. At least not in the sense that drove us up to 64 bits. There are of course plenty of reasons why we'd like to do math with 128 or more bits, but for memory addresses, 64 bits are plenty.
64 bits are fine until you want to share an address space across a whole bunch of nodes.
We currently have supercomputers with tens of gigabytes per processor and tens of thousands of processors. Already up to 51 bits. We're also working out fast persistent storage like NVMe and crosspoint that allow us to attach a board with terabytes where we used to attach gigabytes of DRAM. Combine those two and you can max out a 64 bit address space with current technology, let alone tomorrow's technology.
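To sanity-check that bit count (the concrete per-node figures are illustrative stand-ins for "tens of gigabytes" and "tens of thousands"):

```python
import math
# Illustrative: 32 GiB per processor (2^35 bytes) across 65,536
# processors (2^16) -- assumed stand-ins for the rough figures above.
per_processor = 32 * 2**30
processors = 65_536
print(math.log2(per_processor * processors))  # 51.0 -> 51 address bits
```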
So let's crunch the numbers a bit. There are already supercomputers with 40k CPUs, let's go all out and plan to use 80k CPUs. 8TB NVMe cards have been out for a couple years, we should be able to get 16TB cards by the time we're assembling. We should be able to fit four per CPU with a compact design. That gets us to 5EB. That's dangerously close to the 16EB limit, and that's with 2016/2017 technology. And the point where you start having trouble isn't 16EB, it's under 8EB, when you stop being able to use the normal user/kernel split or have all the memory mapped into kernel space.
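Reproducing that arithmetic (figures taken straight from the paragraph above):

```python
cpus = 80_000
bytes_per_cpu = 4 * 16 * 10**12     # four 16 TB NVMe cards per CPU
total = cpus * bytes_per_cpu
print(total / 10**18)               # 5.12 -> ~5 EB
# With a traditional user/kernel split, only half of the 2^64 space
# (2^63 bytes, i.e. 8 EiB) is usable on one side:
print(total / 2**63)                # ~0.56 -> already over halfway there
```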
> Then again, super computers aren't really running the same architecture as the rest of us.
Sure, but there are pushes to make datacenters act more like one giant computer, so the benefits of a shared address space aren't restricted to traditional supercomputers.
That will take a while. And even then, people will likely keep using 64 bits for most things.