I think Apple should randomize data structure ordering, change flags and logic in the memory allocator, and choose a different set of compiler optimizations with every release.
At least that way, most exploits and bugs would require an expert to put in substantial effort to update them for a new OS release, and many exploits wouldn't work at all on a new release. If, for example, an exploit lets a stack buffer overrun by one byte, exploitability depends on what data follows the buffer; if the compiler randomizes that layout, the bug might become non-exploitable in the next release.
Is it marginal only for best-in-the-world experts and a serious hurdle for everyone else? If so that's still worthwhile as it means the attacker must hire (or be) an expensive expert.
My understanding is that most of these zero-days are exploited at runtime, so the above wouldn't help at all. The most recent one took advantage of Apple Wallet taking first dibs on a malicious image and loading in the payload. Changing data structures, flags, or compiler optimizations wouldn't have made a difference.
The process of going from [malicious image which gets loaded by apple wallet] to [shellcode running] depends hugely on compiler flags, memory layout, etc.