They trained (and report results for) two optimization policies:
- inline-for-size: trained on a large internal software package containing 30k modules. The trained policy generalizes to compiling other software, achieving a 3%–7% size reduction.
- regalloc: the register-allocation policy yields 0.3%–1.5% improvements in queries per second (QPS) on a set of large-scale internal datacenter applications.
Try it Yourself
Check out the open-sourced end-to-end data collection and training solution on GitHub (https://github.com/google/ml-compiler-opt) and a demo that uses policy gradient to train an inlining-for-size policy (https://github.com/google/ml-compiler-opt/blob/main/docs/demo/demo.md).
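To make the training principle concrete, here is a minimal, self-contained sketch of policy gradient (REINFORCE) on a toy two-action problem. This is not the MLGO demo code; the toy rewards, learning rate, and loop structure are illustrative assumptions, standing in for the real setup where the action is an inlining decision and the reward comes from the resulting binary size.

```python
# Hedged sketch, NOT the ml-compiler-opt demo: REINFORCE with a
# softmax policy on a toy two-action bandit. Action 1 stands in for
# the "better" decision (e.g. the one producing a smaller binary).
import math
import random

random.seed(0)

REWARDS = {0: 0.2, 1: 1.0}  # toy rewards; assumption, not real data


def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]


logits = [0.0, 0.0]  # policy parameters: one logit per action
lr = 0.5             # learning rate (illustrative)

for _ in range(200):
    probs = softmax(logits)
    action = random.choices([0, 1], weights=probs)[0]
    reward = REWARDS[action]
    # REINFORCE update: logit_k += lr * reward * d(log pi(action))/d(logit_k)
    # For a softmax policy, that gradient is (1 if k == action else 0) - probs[k].
    for k in range(2):
        grad = (1.0 if k == action else 0.0) - probs[k]
        logits[k] += lr * reward * grad

print(softmax(logits)[1])  # probability of the better action, now near 1
```

Because action 1 earns a larger reward, its logit is pushed up more on average, so the policy concentrates probability mass on it; the real demo applies the same idea with a neural-network policy over compiler features.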
The open-sourced release covers both policies:
- inline-for-size
- regalloc

With code, that's awesome. That's what I like to see.