Far easier to suppress kernel/driver logging of kernel addresses and deny access to /dev/kmem et al.
Leaving eBPF access open has demonstrably paved the way for fileless malware to persist unnoticed.
A real cybersecurity specialist would only allow eBPF access on the host OS if no network access can be made to the host OS (and it's OK for guest VMs to have eBPF).
An uber cybersecurity goon, however, would compile the eBPF JIT out of the Linux kernel (or use a BSD variant instead).
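Short of rebuilding the kernel, there are runtime knobs that narrow eBPF exposure. A hedged sketch (the sysctl names are real; the file path and chosen values are just one reasonable configuration):

```shell
# /etc/sysctl.d/90-bpf-lockdown.conf  (illustrative path)

# Forbid bpf(2) for unprivileged users. 1 is a one-way latch until reboot;
# 2 also disables it but lets a privileged admin re-enable it later.
kernel.unprivileged_bpf_disabled = 1

# Blind/constant-randomize JITed code to frustrate JIT spraying (2 = all users)
net.core.bpf_jit_harden = 2
```

Compiling the JIT out entirely, as suggested above, means building with CONFIG_BPF_JIT=n; note that many distro kernels instead set CONFIG_BPF_JIT_ALWAYS_ON=y, which removes even the interpreter fallback.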
Also worth looking into are seccomp profiles. That's a bit different, but it's useful for containers and for securing your own code where the attack surface might be massive or you may be running untrusted code. Think of trying to secure things like online language "playgrounds" against server-side exploitation.
It might be worth it in certain cases of extreme security requirements, but the implications of what you suggest are severe. For example, you've ruled out the convenience of many operator actions. Instead of being able to change your resolver configs, first you'd have to change the resolver configs in the training environment, deploy a model that permits the old and new behavior into prod, then finally deploy your new configs. The same would be true for other things like timezone database updates. Any kind of external stimulus that changes your application's syscall pattern would require such forethought, and it could be a DoS vector.
Also, I think people underestimate the runtime cost of Linux syscall tracing. It's pretty high.