My experience of having a human mind teaches me that bias must be actively fought: all learning systems have biases arising from some combination of limited sample size, other sampling biases, and overfitting. One must continuously examine and attempt to correct for bias in pretty much everything.
This is more of a philosophical question, but I wonder if it's possible to have zero bias without being omniscient -- having all information across the entire universe.
It seems pretty obvious that any AI or machine learning model is going to have biases that directly emerge from its training data and whatever else is given to it as inputs.
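A toy sketch of that point (hypothetical numbers, not from any real dataset): a "model" that simply learns the average label from its training data will inherit whatever sampling bias that data carries. If one subgroup is under-represented in the sample, the learned estimate is systematically off, no matter how correctly the learning procedure itself runs.

```python
# Population: two groups with different true average outcomes.
population = [("a", 10.0)] * 500 + [("b", 20.0)] * 500
true_mean = sum(v for _, v in population) / len(population)

# Biased sample: group "b" is heavily under-represented (400 vs 100),
# even though the two groups are equally common in the population.
biased_sample = ([v for g, v in population if g == "a"][:400]
                 + [v for g, v in population if g == "b"][:100])
learned_mean = sum(biased_sample) / len(biased_sample)

print(true_mean)     # 15.0 (the population truth)
print(learned_mean)  # 12.0 (the "model" is biased toward group "a")
```

The estimator here is as simple as possible, yet the gap between 15.0 and 12.0 comes entirely from the sampling, which is the sense in which the bias "directly emerges from the training data."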
I don't think the word "bias" is well enough specified in this discourse to answer that question. Or maybe I'd say it's overloaded to the point of uselessness.
Is bias 'an opinion at odds with reality'? Is it 'an opinion at odds with an ethical framework'? Is it 'an opinion that when applied makes the opinion true'? Is it 'an opinion formed correctly for its initial priors, but now incorrect with updated priors'? Is it 'an opinion formed by correctly interpreting data that does not accord with a social concept of "neutral"'?
All of these get conflated all the time, as far as I can tell. I'd love to see tests for each of them. We tend to see only the 'AI does not deliver a "neutral" result' studies but, as I said above, very little assessment of the underlying definitions to determine what that result even means.
> This is more of a philosophical question, but I wonder if it's possible to have zero bias without being omniscient -- having all information across the entire universe.
It’s not. It’s why DEI etc. is just biasing in favor of anyone other than white/Asian males. It comes from a moral/tribal framework that is at odds with a meritocratic one. People say we need more X representation, but they can never say how much.
There’s a second-order effect as well, where taking all the best individuals may not produce the best teams. Trust is generally higher among people who look like you, and trust is probably the most important part of human interaction. I don’t care how smart you are if you’re only here for personal gain and have no interest in maintaining the culture that was so attractive to outsiders in the first place.