Have we even figured out what bias we care about? Race, gender, age, etc. are some obvious candidates, but is that the whole list?
These sorts of problems are often formulated theoretically (“suppose we want fairness with respect to variable Z”). It seems that half the battle is figuring out what to be fair with respect to. Often, the fairness variable in question isn’t even an explicit feature; it’s only implicit in the data (e.g., race in human photos). So the space of things to be fair about is potentially infinite.
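To make the “fairness with respect to variable Z” framing concrete, here is a minimal sketch (my own illustration, not anyone’s production code) of one common formalization, demographic parity, when Z actually is an explicit column. The column names ("approved", "Z"), the toy data, and the alert threshold are all assumptions for illustration:

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, outcome: str, z: str) -> float:
    """Largest difference in positive-outcome rate between groups of Z."""
    rates = df.groupby(z)[outcome].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval data where the fairness variable Z is explicit.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "Z":        ["a", "a", "a", "b", "b", "b", "b", "a"],
})

gap = demographic_parity_gap(df, "approved", "Z")
print(gap)  # e.g. flag the model if the gap exceeds some chosen threshold
```

The hard part, as above, is that in many real datasets there is no "Z" column to group by in the first place.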
For example, maybe life insurance models are biased against people predisposed to developing cancer. Maybe ad targeting singles out people suffering from depression. You can keep partitioning the space this way forever, so it seems the algorithms themselves would be relatively straightforward if only you could formalize the bias requirements.
This is before you even consider the fairness variables interacting (e.g., age and gender and race), which potentially requires normalizing across an exponentially growing number of feature combinations.
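A rough sketch of that combinatorial blowup, with made-up variable names and level counts purely for illustration: each fairness variable you add multiplies the number of intersectional subgroups you have to compare (and many of the resulting cells end up with too few samples to estimate anything reliably).

```python
from itertools import product

# Hypothetical fairness variables and their levels (assumed for illustration).
fairness_vars = {
    "age_bucket": ["18-29", "30-44", "45-64", "65+"],
    "gender":     ["f", "m", "nonbinary"],
    "race":       ["a", "b", "c", "d", "e"],
}

# Every intersectional subgroup is one cell in the cross product.
subgroups = list(product(*fairness_vars.values()))
print(len(subgroups))  # 4 * 3 * 5 = 60 cells to normalize/compare across
```

Add a fourth or fifth variable and the cell count keeps multiplying, while the data per cell keeps shrinking.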