Interesting analogy with overfitting, though I would say this is more of an underfitting problem: we have enough 'training data' to know where our current model fails (the inconsistencies, as the author puts it, such as dark matter). That suggests our inductive biases are too strong or simply incorrect, and relaxing them to increase expressivity would be the typical ML approach here.
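To make that framing concrete, here is a minimal toy sketch (my own illustration, not from the post): a model class with a too-strong inductive bias ("the world is linear") underfits data whose true structure is cubic, leaving large residuals, while relaxing the bias resolves the discrepancy.

```python
# Illustrative sketch only: underfitting caused by an overly strong inductive bias.
# Fitting a degree-1 polynomial to cubic data leaves large residuals (the
# 'inconsistencies'); relaxing the bias to degree 3 captures the structure.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = x**3 - x + rng.normal(scale=0.2, size=x.shape)  # true signal is cubic

for degree in (1, 3):
    coeffs = np.polyfit(x, y, degree)                      # least-squares fit
    residual = np.mean((np.polyval(coeffs, x) - y) ** 2)   # mean squared error
    print(f"degree {degree}: mean squared residual = {residual:.3f}")
# degree 1 underfits (large residual); degree 3 fits without memorizing noise.
```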
The author's point that we must not add complexity to solve "non-problems" is very consistent with the ML analogy, though again this would mainly be about avoiding the addition of too much (or incorrect) inductive bias, and hence underfitting, as well.