Is that overfitting? I thought overfitting was specifically when your model fits the training data so well that its ability to generalise to your target environment begins to decline. The size of the training set and the performance on it don't really matter; all that matters is whether performance in the specific environment you intend to deploy the model in starts dropping. In your case, you're training the models for a very narrow deployment target, and they remain competent there.
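To make that concrete, here's a minimal sketch (hypothetical data, numpy only) of the usual operational test: overfitting shows up as a gap between training-set fit and performance on held-out data from the same target distribution, not as narrowness of the task itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small training set and a larger held-out set, both drawn from the
# same noisy sine curve (i.e. the same "deployment environment").
x_train = rng.uniform(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)
x_test = rng.uniform(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 100)

def mse(degree):
    # Fit a polynomial of the given degree to the training set and
    # report mean squared error on both sets.
    coeffs = np.polyfit(x_train, y_train, degree)
    err_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    err_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return err_train, err_test

train3, test3 = mse(3)   # modest capacity
train9, test9 = mse(9)   # enough capacity to interpolate all 10 points

# The degree-9 fit drives training error toward zero while held-out
# error grows: that gap is the overfitting signal, and it says nothing
# about whether the task itself is broad or narrow.
print(f"degree 3: train={train3:.4f} test={test3:.4f}")
print(f"degree 9: train={train9:.4f} test={test9:.4f}")
```

By this criterion a model that only ever sees one narrow environment, and keeps performing well in that environment on fresh data, isn't overfit at all.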
Couldn't you use this logic to say that AlphaGo is overfit because it can only play Go, not chess?