On the other hand, if a model fails on certain populations because the training data underrepresents them, often because they have historically been viewed as a less important subgroup, then you've simply encoded societal biases into your model. Understanding that difference and pointing out problem spots like it is a great job for an ethical AI researcher.