This is super interesting. The author states "the strong version of Goodhart's law" as a fact but does not provide a theorem showing that it is true. This recent paper does the job [0]. The authors write about Goodhart's law in the context of AI alignment, but they are clear that their theorem is much more broadly applicable:
> we provide necessary and sufficient conditions under which indefinitely optimizing for any incomplete proxy objective leads to arbitrarily low overall utility
> Our main result identifies conditions such that any misalignment is costly: starting from any initial state, optimizing any fixed incomplete proxy eventually leads the principal to be arbitrarily worse off.
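To get a feel for what that's saying, here's a toy sketch of my own (not the paper's model, which is far more general): overall utility depends on several attributes that share a fixed resource budget, but the proxy only measures one of them. Pushing the proxy up keeps pulling resources away from the unmeasured attributes, and overall utility keeps falling.

```python
import math

# My own toy illustration, not the paper's model: utility depends on three
# attributes sharing a fixed budget of 10, but the proxy only "sees" x[0].
# Optimizing the proxy reallocates the budget onto x[0] and overall
# utility collapses even as the proxy climbs.

def true_utility(x):
    # diminishing returns on the measured attribute, linear on the rest
    return math.log1p(x[0]) + x[1] + x[2]

def proxy(x):
    return x[0]  # incomplete proxy: ignores x[1] and x[2]

x = [1.0, 4.5, 4.5]  # initial allocation of the fixed budget
for _ in range(6):
    print(f"proxy={proxy(x):5.2f}  true utility={true_utility(x):5.2f}")
    # "optimize the proxy": move 1.5 units from the unmeasured
    # attributes onto the measured one, total budget held constant
    shift = min(1.5, x[1] + x[2])
    from_b = min(shift, x[1])
    x = [x[0] + shift, x[1] - from_b, x[2] - (shift - from_b)]
```

Running it, the proxy goes from 1 to 8.5 while true utility drops from about 9.7 to about 3.8, and it only gets worse the longer you keep optimizing.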
One thing worth noting is that the metrics the author mentions (sales, likes, etc.) are clearly, as everyone would readily admit, not a true measure of value. At best, they're a proxy for what actually matters. And we know from Goodhart's law and reward hacking that optimizing a proxy is, at some point, either useless or actively counterproductive. This thought can be a real source of peace of mind.
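A made-up example of the "useless, then counterproductive" point: imagine dialing up how aggressively a headline is optimized for clicks. In this toy model the clicks (the proxy) keep rising, but the value readers actually get peaks early and then goes negative as trust erodes. The numbers are invented purely for illustration.

```python
# Toy model (invented numbers): the proxy improves monotonically,
# but what actually matters peaks and then turns negative.

def clicks(aggressiveness):
    return 100 * aggressiveness  # the proxy: always goes up

def reader_value(aggressiveness):
    return 100 * aggressiveness - 40 * aggressiveness ** 2  # what matters

for a in [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]:
    print(f"aggressiveness={a:3.1f}  clicks={clicks(a):6.1f}  "
          f"reader value={reader_value(a):7.1f}")
```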
> And we know from Goodhart's law and reward hacking that optimizing a proxy is, at some point, either useless or actively counterproductive.
I don't think that's accurate. It might be useless from the perspective of the system, but hugely advantageous from the perspective of the individual. When a co-worker hacks the metrics and gets the promotion, that might be bad for the company but it is great for them, and perhaps bad for you.
The same is true for sales, search engines, and social interactions.
I agree with your first point that the proxies are not true reflections, but I don't see where the peace of mind comes in when someone loses out because of them. If anything, I think it would foster anger that the system is rigged, which is basically the opposite of peace of mind.
[0]: https://proceedings.neurips.cc/paper/2020/hash/b607ba543ad05...