This AI is being marketed to companies looking to improve their diversity. If it acts as you expect, those companies will simply go out of business. I think the more likely scenario is that it achieves the desired outcome not by eliminating bias, but by inverting it, since that is more likely to produce measurable success from the customer's perspective.
My real concern is that it ends up selecting for normalness. I could easily see it biasing against the homely, the obese, people who struggle with mental illness, or the just plain weird. Would anybody check for that kind of failure the way they would for a higher-profile bias like gender? It seems unlikely to me. I imagine some would even regard it as a feature.