Security is a complex and rapidly evolving field. Many practices now considered mandatory are historically recent, simply because we didn't yet know we would need them.
A security-first OS from 20 years ago would most likely be horribly insecure today.
The billions that have been lost, and the lives that have been lost, have, in the blink of an eye, rendered the "too costly to implement" argument moot.
For bean-counting purposes, it's just really convenient that the burden of that cost was shifted onto somebody else, so the claim can keep being made that any other solution would still be too costly to implement.
Accepting the status quo that got us here in the first place, under the pseudo-rational argument that there are no realistic alternatives, is simply burying one's head in the sand while careening, full steam ahead, toward the next wall waiting for us.
That an alternative might not be available today does not mean a new one cannot be actively pursued, or that it is not time for extreme introspection.
> To see this, here’s an exercise you can do yourself, if you have any training at all in advanced mathematics. Take a fairly generalist journal, like the Journal of Algebra (take a topic in which you have expertise — my doctoral thesis was in algebra). Look at some of the papers. How many of them are truly interesting to you?
> These days, I can look at any of these journals and find at most one or two papers that are even remotely amusing, and algebra was my specialty. On the other hand, I can take a journal in biology like the Journal of Animal Behavior and still find quite a few papers in each journal that are interesting to me even though I’m not even a research biologist! Keep in mind I still like mathematics a lot, and I still enjoy algebra.
Couldn't you also say this directly disproves his point? It may be that there are so many open, interesting problems that we can become highly picky about which ones get solved, to the point that those preferences are shared by fewer people. That would indicate an expansive set of problems rather than an exhausted one.
> And the best way to verify a compiled binary is correctly compiled is to do the compilation. That's a trivial amount of work. Constructing a ZKP of the same is certainly more cost intensive.
This assumes both that you have the source code and that the compiler is deterministic. The latter does not hold by default for most(?) modern compilers; reproducible builds are still the exception rather than the rule.
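To make the "verify by recompiling" idea concrete, here is a minimal Python sketch. The `deterministic_compile` function is a hypothetical stand-in for a real compiler, just so the example is self-contained; the point is that hash comparison is only a sound check when compilation is fully deterministic.

```python
import hashlib

def deterministic_compile(source: str) -> bytes:
    # Hypothetical stand-in for a real compiler: a toy, fully
    # deterministic transformation of source to "binary".
    return source.upper().encode()

def verify_by_rebuild(source: str, claimed_binary: bytes) -> bool:
    # Recompile the source and compare digests. This check is only
    # sound if the compiler is deterministic; embedded timestamps,
    # build paths, or nondeterministic codegen order would break it.
    rebuilt = deterministic_compile(source)
    return hashlib.sha256(rebuilt).digest() == hashlib.sha256(claimed_binary).digest()

src = "int main(){return 0;}"
good = deterministic_compile(src)
print(verify_by_rebuild(src, good))         # True: rebuild matches
print(verify_by_rebuild(src, b"tampered"))  # False: digest mismatch
```

With a real toolchain, the same comparison is what reproducible-builds efforts enable: two independent builds of the same source yielding bit-identical artifacts.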
Ah yes, the totally-valid use case of ensuring that a third party has properly compiled source code that you don't have access to. I'm sure this comes up in industry all the time.
Can you provide an example of a ZKP of memory safety of a compiled output that would be less computationally expensive than static analysis of the binary?
> the more you brag about having the money to retire, the more signals you send to the Fed and policy makers to keep cranking up the money supply treadmill to keep the pressure on working people.
I hope federal banks are smart enough not to base monetary policy on anecdotes.
Federal banks don't make decisions; the people who work at them do. And people often do make decisions based on anecdotes, especially anecdotes that support an idea they were already in favor of. So, yeah.
A technique applied to widely different fields obviously yields a large set of interpretations, each through the lens of its own field.
But that doesn't invalidate any of those interpretations, no?
Those heuristics govern *when* to apply certain transformations, in situations where the two options are already proven equivalent. That is different from transforming correct code into possibly incorrect code.
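As a toy illustration of that distinction, here is a hedged Python sketch of a classic strength reduction. The rewrite of `x * 2` to `x << 1` is legal precisely because the two forms are provably equivalent on the domain in question; a heuristic then only decides which form to emit, never changing observable behavior.

```python
def original(x: int) -> int:
    # The form the programmer wrote.
    return x * 2

def transformed(x: int) -> int:
    # Strength-reduced form a compiler might prefer. For Python's
    # arbitrary-precision ints, x << 1 equals x * 2 for all x,
    # including negatives, so the rewrite is equivalence-preserving.
    return x << 1

# Spot-check the proven equivalence over a sample domain.
assert all(original(x) == transformed(x) for x in range(-1000, 1000))
```

A transformation that is only valid under extra assumptions (say, "no signed overflow occurs") is a different beast: if the assumption is violated, it turns correct code into incorrect code, which is exactly what the heuristics above do not do.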