I find it ironic that in ML we work with vectors of thousands of quantities, and then we go and measure social/economic phenomena with one (or a couple of) numbers.
The general discourse (news, politicians, forums, etc.) over a couple of measures will always be highly simplifying. The discourse over thousands of measures will be too complex to communicate easily.
I hope that at some point most people will implicitly acknowledge that the fewer the measures, the more likely it is a simplification that hides things. (e.g.: "X is a billionaire, so he's smart"; "country X has a higher GDP than country Y, so it's better"; and so forth).
> I hope that at some point most people will implicitly acknowledge that the fewer the measures, the more likely it is a simplification that hides things.
But the larger the number of measures, the more free variables you have, which makes it easier to overfit, either accidentally or maliciously.
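
A quick sketch of that overfitting point, using purely synthetic (random) data just to show the mechanism: as the number of "measures" approaches the number of observations, an ordinary regression can fit pure noise almost perfectly in-sample.

```python
# Minimal sketch with made-up data: more free variables -> easier to "explain" noise.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples = 50

for n_measures in (2, 10, 49):
    X = rng.normal(size=(n_samples, n_measures))  # the "measures" are random noise
    y = rng.normal(size=n_samples)                # the outcome is unrelated noise
    in_sample_r2 = LinearRegression().fit(X, y).score(X, y)
    print(f"{n_measures:>2} measures -> in-sample R^2 = {in_sample_r2:.2f}")

# With 49 measures and 50 observations the fit looks nearly perfect,
# even though there is nothing real to find.
```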