The fundamental problem here seems to be scale (like often in tech).
I don't think it is correct to say that algorithms "enabled discrimination against job applicants". Otherwise we wouldn't have discussions about ageism or about how much personal information should go on a CV. Closed-loop systems are also not something AI brought upon our world; otherwise the term "white privilege" would not have made it into public consciousness. There are certainly racist hiring managers, and interviewers who mentally discard anyone who looks like their ex.
The key is: all of these effects are fairly contained in comparison. Companies can be sued, people can be fired, and even systemic biases can be evaded in a lot of cases. I am not trying to say that e.g. sexism in hiring is harmless or anything, I'm just trying to weigh two evils.
We do enter a disturbingly dystopian world when the AI-hiring startup brings its fancy algo to the mighty gods of cloud computing and just scales into infinity. Now suddenly hundreds of companies and millions of people are hit by the same biases. With discrimination at that level, personal damage compensation becomes a warm gesture at most, because the cultural deterioration done by subconsciously establishing a group as subpar workers is plainly immeasurable.
And if you think about it, this exact problem transcends hiring. You have the right to kick everybody carrying something resembling a spray-paint can out of your mall to prevent vandalism. You may discriminate against some people who were just trying to do something about the flies in their kitchen, sure. Happens. But if Google Maps, or Cloudflare, or Netflix does it, they are ever so subtly nudging the tech interest, or the information accessibility, or the quality of life, or the probability of getting hired, down by a fraction of a fraction of a percent. For a billion people, or for all Hispanic people in the US, or for everyone using sub-$300 phones.
That is a very real, very dark problem we should all be aware of in its greater context!
The other thing you didn't touch on is: lack of variance.
As markets consolidate, most companies will use one, maybe three different AI products to rate people. So if an AI decides that you're not fit for work, you're done for - no matter where you apply, it'll be that one AI returning the exact same result. There's no variance of different people looking at applications.
The markets will sort it out, though. That filtered-out candidate has some inherent worth, and companies can take advantage of that. Competent hirers will then use multiple AI screening products and experiment with their hiring to make sure they're not losing out on talent that's been inaccurately filtered out.
CVs are already a horribly low-signal input - people lie way too much on them. I wouldn't expect AI to take into account anything you personally put on paper, just like insurance/credit-score AIs don't care much about what you write down.
I believe you're missing the whole point. AI is often used as a fancy name for classifiers in general, including supervised and unsupervised learning algorithms.
Filtering the initial batch of candidates is basically a classification problem. It is also very time-consuming to perform by hand.
If recruiters are able to use supervised/unsupervised algorithms to filter out 90% of the initial candidates with a low misclassification error and without wasting valuable time, then they'll be able to operate more efficiently.
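To make that concrete, here's a minimal sketch of screening-as-classification in Python with scikit-learn. Everything in it is illustrative: the features are random stand-ins, and the labels play the role of historical screening decisions. Worth noting that a model trained this way can only reproduce whatever was in those past decisions, biases included.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # stand-ins for candidate features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in for past screening decisions

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank unseen candidates by predicted probability of advancing and keep
# only the top 10% for human review - the "filter out 90%" step.
scores = clf.predict_proba(X_test)[:, 1]
shortlist = X_test[scores >= np.quantile(scores, 0.9)]
print(f"held-out accuracy {clf.score(X_test, y_test):.2f}; "
      f"shortlisted {len(shortlist)} of {len(X_test)}")
```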
The trick is telling how far the automatic classification should go in helping pick the best candidates. Some companies use stupid FizzBuzz tests to weed out candidates: they abuse a quantifiable metric whose signal is negligible, but which is very effective at discarding a considerable portion of the initial pool.
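For anyone unfamiliar, the FizzBuzz exercise in question is roughly the following, which is exactly why it carries so little signal beyond "can write a loop":

```python
# The classic FizzBuzz: print 1..100, substituting "Fizz" for multiples
# of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for i in range(1, 101):
    out = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
    print(out or i)
```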
There are several classes of problems. One is of course intentional discrimination. The larger problems are likely either a failure to care about, consider, or attend to problems, or, most insidiously, side effects which arise entirely unintentionally.
The fact that much gradient descent machine learning is opaque to explanation means that such AI essentially becomes a new form of knowledge: like science, it provides answers, but unlike traditional Baconian scientific methods, it doesn't answer why or how, and fails to provide cause or mechanism.
Given their use in increasingly complex, large-scale systems, without ready human review or oversight, this creates conditions for numerous unfortunate and probable consequences.
It's already the case with the credit-rating companies, which have slowly nudged themselves everywhere and are even being used by the government to check your identity. I bet that having a mistake on file at Experian can make your life a nightmare.
Once the market has consolidated itself there will be 2-3 companies controlling the vast majority of all hiring processes. At that point it will be easier to just ask what type of job their AI decided you would be a good fit for.
Think about how inefficient the hiring process is right now. Humans can only process so much information. There are only so many companies you can learn about and apply to. Most of the time people end up in a local maximum and aren't really happy with their job. In contrast, the AI has access to hundreds of thousands of companies and profiles that it can match with each other. They will claim that the AI knows what you truly desire and will find the best fit that makes you truly happy.
Welcome to the future, where you will get lessons in virtual school on how to be best perceived by the big AI.
And thus instead of being outwitted by a superintelligent AI maximizing paperclips, we end up with a pantheon of blind idiot gods, tended to by priests who don't understand them, and kept alive with human sacrifice.
I wonder if it's possible to audit AI for discriminatory practices. Of course this would require a completely different set of legal processes (needing access to the algorithm, needing access to enough data to prove a bias) that could take decades to implement in a way that covers even a slim majority of applicable, obvious cases like race, sex, pregnancy status, religion, nation of origin, veteran status, and age.
Decades of the bar being just that much higher for the disadvantaged demographic. Potentially, a generation of disenfranchisement, which we know has generational effects. (The USA arguably still hasn't 'recovered' from segregation's economic effects on the populace)
All the EEOC would need to do is to prove that there is disparate impact[1] based solely on the hiring outcomes of companies using AI to hire. This is what they already do.
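The check itself is simple arithmetic. A common rule of thumb here is the EEOC's four-fifths rule: if one group's selection rate is below 80% of the highest group's rate, that's treated as evidence of adverse impact. A toy sketch with made-up numbers:

```python
# Toy disparate impact check in the spirit of the EEOC four-fifths
# rule: a selection rate below 80% of the highest group's rate is
# treated as evidence of adverse impact. All numbers are invented.
outcomes = {
    "group_a": (200, 60),   # (applicants, hired)
    "group_b": (200, 30),
}
rates = {g: hired / applicants for g, (applicants, hired) in outcomes.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    verdict = "adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```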
Given that AI are often not explanatory (see my earlier comment in this subthread), access to the algorithm may not be strictly necessary. Though the ability to black-box test it against a wide range of possible inputs might be a good thing to aim for.
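A minimal sketch of what that black-box testing could look like: feed the model paired inputs that differ only in a protected attribute and measure how often the decision flips. The model here is a deliberately biased toy stand-in; `probe_for_bias` and the feature layout are my own illustrative inventions, not any real auditing API.

```python
import numpy as np

def probe_for_bias(predict, X, protected_col):
    """Fraction of inputs whose decision changes when only the
    protected attribute (assumed binary) is toggled."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return np.mean(predict(X) != predict(X_flipped))

# Toy black box that (badly) leans on the protected column 3.
toy_predict = lambda X: (X[:, 0] + X[:, 3] > 1).astype(int)

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(10_000, 4))  # binary features for simplicity
print(f"decision flips for {probe_for_bias(toy_predict, X, 3):.1%} of inputs")
```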