One real danger is that a lot of data points correlate with race and gender. Machine learning can become really good at inferring race and gender from variables that appear unrelated to a human reviewer but in fact carry a small correlation. In the end, it is easy to use “AI” to whitewash your biases without ever explicitly targeting protected characteristics.
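A minimal sketch of the effect with synthetic data (the feature interpretations and coefficients are made up for illustration): each individual proxy correlates only weakly with a protected attribute, yet an off-the-shelf logistic regression recovers the attribute well above chance once the proxies are combined.

    # Sketch with synthetic data: the protected attribute is never a feature
    # given to the downstream model, but many weakly correlated proxies leak it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, k = 10_000, 50                    # applicants, weak proxy features
    protected = rng.integers(0, 2, n)    # hypothetical binary attribute

    # Each proxy (think zip-code bucket, hobby keywords, commute distance)
    # shifts only slightly with the attribute, drowned in unit-variance noise.
    proxies = protected[:, None] * 0.2 + rng.normal(0.0, 1.0, (n, k))

    corrs = [abs(np.corrcoef(proxies[:, i], protected)[0, 1]) for i in range(k)]
    print(f"mean |correlation| per proxy: {np.mean(corrs):.2f}")  # ~0.10

    X_tr, X_te, y_tr, y_te = train_test_split(proxies, protected, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print(f"attribute recovered from proxies alone: "
          f"{clf.score(X_te, y_te):.2f} accuracy")  # well above the 0.50 baseline

No single proxy gives away much, but jointly they reconstruct the attribute, which is exactly why dropping the protected column from the training data is not enough.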
Anything that doesn’t produce the desired outcome is a nonstarter. There will always be some new group to advocate for. If such a group does not exist, one will be generated through intersectionality.
This AI is being marketed to companies looking to improve their diversity. If it acts the way you expect, those companies simply go out of business. I think the more likely scenario is that it achieves the desired outcome not by eliminating bias but by inverting it, since that is what produces measurable success from the customer’s perspective.
My real concern is that it ends up selecting for normalness. I could easily see it biasing against the homely, the obese, people who struggle with mental illness, or the just plain weird. Would anybody check that it doesn’t fail that way, the way they would for a high-profile bias like gender? Seems unlikely to me. I imagine some would even regard that as a positive feature.