Amazon started using such a system a while back, trained on its hiring data; it started very heavily selecting against applications with female-indicating proxies, and was abandoned.
In another case, after an audit of the algorithm, the resume-screening company found that it treated two factors as most indicative of job performance: being named Jared and having played high school lacrosse,
which reads like a comical stereotype of a young male New England WASP; honestly kind of impressive.
The thing is, this is a laughable anecdote because clearly there is no causal relationship between lacrosse and job performance. But should a company care? Surely what matters most is whether a factor is predictive of job performance. If we assume that playing lacrosse was predictive, does it matter to Amazon whether the relationship is causal?
Obviously this doesn't take into account political pressures to achieve outcomes other than hiring the candidate most likely to perform well.
> clearly there is no causal relationship between lacrosse and job performance
There's almost certainly a correlation due to success in lacrosse having a lot of crossover with success generally. A young man who plays lacrosse is probably going to be physically healthier than average. It's also a very upper middle class sport, typically played by wealthier types. And children who grew up in wealthier households tend to be ... well, better at everything, really. Hockey is a great indicator of that here in Canada; the equipment is expensive.
I wouldn't be surprised if it's causative, either. Sports may improve cognitive performance, and they tend to teach cooperation and leadership. Plenty of studies suggest it helps academics.
If I felt free to discriminate, and if I knew nothing else about a candidate other than whether they had played a physically demanding and expensive team sport within the last year, I would hire the ones who had over the ones who had not. It'd skew heavily towards fit 20-to-40-something guys from middle-class backgrounds with college educations. Surprise! That's probably what the algorithm picked up, indirectly. Of course, it also blatantly skews white, male, and economically elite.
The promise of AI is that it will be able to find associations and figure out things that humans normally wouldn't - so the AI is now finding associations and figuring out things that humans previously didn't notice, and we're dismissing it.
Are any of these previously unnoticed associations? I think the problem here is that they are not typically informative or predictive of “success” — whatever that means here. I don’t need a model that is essentially a shitty heuristic (or a collection of them), because I can get that for free, e.g. “we only recruit / hire from the Ivies”.
Because it has about the same level of class and intellectual rigor as “hire a Jew because they are good with money” or “hire a chinaman because they are good with numbers”.
Also not a lot of white guys named Jared playing lacrosse at Oxford or Cambridge (or Peking University).
If I knew nothing other than whether someone was Jewish or not, and if I felt free to discriminate, I would, of course, hire the Jewish person. On average they do better than typical Americans across the board. In reality, I don't feel free to discriminate. So it's a problem if I know that a candidate is Jewish, because I'm clearly biased and working off a stereotype there. So is the AI. Maybe that helps clarify the domain my thought experiment was operating in.
I thought the same, that it was satire, but I fear it is not. It remains ridiculous, though. It really is astonishing how people try to connect dots that are more or less random.
I bet the lacrosse part is due to the sport being played by academics and then by almost nobody else, a cue for the AI without intelligence to mark it as an inclusion criterion. Similar to what golf was a few years ago: a unique property to differentiate academics from non-academics.
It's not an assumption that sports are associated with general success and intelligence. I felt it was a well-established enough fact that there's no need to link to studies showing it. It's even the basis for public policy in my country: we do sports-based interventions for troubled youth, and such programs are one of the few things that seem to actually work to keep young men going to school and to improve their academics.
Now if you mean I'm making an assumption about a candidate, based on nothing more than one weak indicator, well yes. That was the point of my thought experiment. If you only knew whether they played sports, while not a very good indicator, it's better than flipping a coin. Ultimately, that's what hiring is. You have some indicators like education and previous work experience, and you make an assumption based on those facts, none of which align 100% with how they're going to perform at the job you're hiring them for.
>Now if you mean I'm making an assumption about a candidate
That was even more obvious than the sports and cognition fact.
Here's the thing: you're doing the exact thing that caused the problem in the first place: creating thought experiments and narrating them at length about scenarios that are a rarity, for a problem that occurs frequently, in an environment with thousands of other variables.
And this in a world where we do in fact have better indicators that are still not used, despite being well researched. We have plenty of options remaining before we need to be this nitpicky.
It's hard to imagine an indicator that wouldn't be closely correlated with family wealth and social status. Wealth and social status are the primary means to access the things that make a person more likely to be successful! Almost any indicator you pick will end up being a proxy for those, to some degree.
> clearly there is no causal relationship between lacrosse and job performance
IMO it could have an effect on "soft skills". Learning to have an authority push you beyond your known limits. Working and training within a team to achieve a goal. Learning as a team to face and overcome weaknesses. Social skills that go with it. Humility from dealing with loss. Learning to celebrate victory with dignity.
There's a lot of educational positives that come from team sports. It's certainly not the only place that a person can get those skills, but it's a very common one.
IMO it's one of the biggest benefits of US football, simply because the teams are so large and diverse, and position groups have different but complementary jobs.
Just like there are different versions of "good" leaders and "good" workers in any given field, there are different types of "good" in sports.
Similarly, sports has a much more straightforward rewards system: score a point, prevent the opponent from scoring, immediate result and reward.
For anything technology related, the action => reward system tends to be far more delayed and requires a different approach and type of patience. Whether it's Devops, Programming, Administration, or Support, even if you do everything right, the reward might be delayed or sullied by negative rewards. (e.g., network is never fast enough for users, it's still too complex to use your program, the solution is just not desired by the user)
Again I do understand the premise of your statement, and for some situations I can agree there is crossover, but my experience in hiring for technical positions is that sports participation has been a non-factor at best. Too often, it's been a negative one because the rewards for technology related challenges tend to be very delayed or difficult for the participant to recognize. Success in Technology is very personal and goes beyond the tangible result, and instead is a victory of the self as you grow intellectually and better understand something. This does happen in sports also (you train yourself to recognize openings that maybe didn't exist for you before or you couldn't act on due to physical limitations), but there are other rewards that make sports more compelling.
Athletes spend tens or hundreds of hours training for every point scored.
Tenacity is a strong signal I look for when hiring: not being afraid to ask for help, but being willing to put in the time when it is needed. Sports are open-ended. With school exams you know how many points you can achieve, so you know when to stop. In life you don't know how many points can be scored or how much you'd need to train to score them, meaning you need a lot more self-assessment.
I don't use sports as a metric but they seem a lot more valid than many signals I see other reviewers use.
I think contact sports in general help people to develop fortitude.
Lacrosse, hockey, football, rugby, etc.
It’s a difference from just developing a skill in terms of mentality.
Within a single sport I’ve always found the offensive line position in football to be unique as well. It’s the only position in sports where your only job is to protect other players.
Selecting for lacrosse players (which, as you say, has no causal relationship to job performance) necessarily selects against non-lacrosse players. Lacrosse players tend to be white and male, so it also selects against everyone who isn't.
Amazon should care, not only because of the obvious legal risk, but also because they are excluding candidates who otherwise do as well as or better than a particular lacrosse player. Why would it make any kind of sense to artificially shrink their candidate pool?
This is also a moral thing, not just a political thing. It's not right to discriminate against people for reasons other than job performance.
> It's naive to expect big corporations to act morally. Best you can do is force them to act legally.
That's not the only avenue. Big corporations do what their managers, customers, employees and investors want.
E.g., restaurants will typically try to serve tasty food, even if there's no legal requirement to do so.
Similarly Google has free lunch, because that's what employees want.
The recent fad for ESG shows that investors can want 'morality', too, and companies are willing to satisfy them. (I quote 'morality' because different people have different ideas, and often people only want the fig leaf of morality, without paying the price for what they would consider real morality. See e.g. greenwashing.) Customers and investors etc. get what they are willing to pay for, and that's not necessarily what they say they want.
But the explicit, stated goal of diversity efforts is specifically to exclude candidates who otherwise might do well, but are not diverse, because non-diverse teams can't perform as well as diverse teams.
There are several ways for something not to be causal. You're assuming here that there's a reliable proxy variable (e.g., wealth -> good education -> "Jared"), which is unlikely to be the case. These proxies will disappear: poorer people will adopt those baby names.
Yes, everything else being equal, some non-zero knowledge of reliable proxies is useful. But in a hiring process, not everything is equal. You can, e.g., trivially examine the candidate.
It's trivial to get actually relevant data for hiring, which makes "AI" dangerously useless. It substitutes a low-quality process (random statistical properties of aggregates of existing employees) for a high-quality process (measurements of the actual applicants).
The story of "AI" over the last decade, it must be said.
Weirdly enough, (poorer) people don't seem to be very good at picking names that signal success. So that proxy is pretty good.
If you are poor and want to give your child a name to signal success, you'd need to look at the names rich people are giving their kids right now. In practice at most we are seeing poor people imitating the past naming practices of rich people. (Of course, that's just a generalisation. Individual people make their individual choices.)
Sure, I think the more important point is that if I can ask you anything, that's immediately far superior information about you compared to any average bucket you happen to fall into.
There is a positive correlation between athletic ability and intellectual ability; strong bodies and strong minds correlate with each other (both are downstream of good nutrition and genetics). Furthermore, successful participation in team sports implies good teamwork and social skills.
Well obviously the solution here is to set aside a few million dollars a year of public money to send a dozen randomly selected children from low income households and disadvantaged backgrounds to learn to play lacrosse.
> Surely what matters most is whether a factor is predictive of job performance
Not to go all critical race theory, but is this all that matters? Should there not be a place in society for people who have lesser aptitude for corporate work?
> We can assume that playing lacrosse was predictive.
I am not going to assume that without further investigation. I am rather skeptical of the robustness of predictive models in the absence of relevant causal relationships.
Here's a sport that is analogous to lacrosse in terms of racial makeup and financial demographics: golf.
See the correlation now?
Except that in golf's case, you usually see golf played around the country by rich white guys. Lacrosse is pretty much a northeastern thing, and when it's played elsewhere, it's normally at schools with the resources to host those clubs.
Overfitting is a pretty well-understood and well-documented problem that we have statistical methods to guard against. If this were just a simple case of overfitting, you'd think that Amazon of all organizations would be able to confirm and correct it.
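A minimal sketch of that kind of guard, assuming a scikit-learn-style workflow with synthetic stand-in data. One caveat worth flagging: cross-validation catches models that fail to generalize to held-out data; it won't catch a model that faithfully predicts a biased label.

```python
# Standard overfitting guard: score the model only on folds it never
# trained on. Data is a synthetic stand-in; a real pipeline would use
# the actual hiring table.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(scores.mean(), scores.std())  # chance-level scores flag a junk model
```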
Ah yes, the classic "we fed a bunch of real world data to an AI, said AI started finding patterns in the data and making decisions based on those patterns, so we shut it down because the patterns weren't politically correct" story. How many times have we heard this one?
>the algorithm found two factors to be most indicative of job performance: their name was Jared, and whether they played high school lacrosse
That doesn't mean that the "pinnacle of human achievement" necessarily is named Jared and played lacrosse in high school. It means that if you hire people who meet those conditions after filtering using X additional criteria, on average you'll get a more performant workforce than if you hire from the general population after filtering using X additional criteria.
The algorithm found an objective truth. Given the data provided and the question asked, the answer should be indisputable. The only reason to reject it should be because someone thinks the answer is incorrect, but that means that either the algorithm is incorrect, the data is incorrect, or the question was poorly phrased.
The "objective truth" for this specific data is that out of all people hired dudes named jared and/or lacrosse players were more high achieving.
That is where you get it wrong: the data is reflective of the bias in the current process. To remove that bias, you can't simply reinforce the current behaviors by AI math-washing the same data.
That's silly. That would imply that the hiring process is actually very good at predicting performance, and it's biased towards hiring highly performant people named Jared and less performant people not named Jared, and it's not hiring highly performant people not named Jared and less performant people named Jared.
What are you talking about? I was saying that the biases we have today in our hiring cannot be eliminated by analyzing that data and hiring the same types of people that are "high performance", because you'll be reinforcing the biases we have, rather than eliminating them.
The algorithm found that rich white preppy kids tend to perform better at a certain job, and then generalized that to say that you should hire them over other candidates. We don't know what that job was (for something like ultra-high-net-worth wealth management, that might be way more true than for coding), and we don't know whether the metrics of performance used are applicable to anything besides how much the boss liked them.
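A toy sketch of the math-washing dynamic described above (entirely hypothetical data; every variable name here is invented for illustration): make true skill independent of group membership, bias the historical ratings, withhold the group label, and the model loads onto a correlated proxy anyway.

```python
# Toy sketch: a model trained on biased historical ratings learns to
# reward a proxy feature correlated with group membership, even though
# the group label itself is never shown to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)           # never shown to the model
skill = rng.normal(0, 1, n)             # true ability, independent of group
proxy = group + rng.normal(0, 0.5, n)   # e.g. "played lacrosse"

# Historical "performance" labels: driven by skill, but past raters
# systematically favored group 1.
rating = skill + 0.8 * group + rng.normal(0, 1, n) > 0.5

X = np.column_stack([skill, proxy])     # group label withheld
model = LogisticRegression().fit(X, rating)
print(model.coef_)  # proxy gets a large positive weight: bias, math-washed
```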
I don't know about the Jared thing but I wouldn't be surprised at all if a high school sport ended up being a legitimate indicator of future job performance, even if not a very significant one.
I can agree that high school sports players are probably hard workers, sure. Clearly, hard work is a big part of doing well in life, since life has a lot of hard work (some of it is fun, too). However, you can't just blindly "follow the data" unless you have literally accounted for all of the data that is currently not collected, which we don't do because we cannot collect every metric relevant to this issue.
> However, you can't just blindly "follow the data" unless you have literally accounted for all of the data that is currently not collected, which we don't do because we cannot collect every metric relevant to this issue.
Why not? Isn't that basically what the status quo is for hiring? Why does AI hiring have such a higher bar?
In general, if you are a machine learning person and you are using a regression system that "explains" itself somehow, I highly recommend frequently looking at those explanations. Once you're excited that you've built a 90%+ model, crack open the hood of that model before you deploy it and do a sanity check on it. While ML algorithms can occasionally pluck some interesting insights out of the data, in my admittedly limited experience I've also seen models put together out of what is clearly just garbage: meaningless weights placed on attributes that can't possibly be that relevant, carving out a hyperplane through hyperspace that can't possibly represent reality, despite all the efforts to hold out validation data, rotate the validation data, and everything else done to avoid overfitting.
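A minimal sketch of that sanity check, assuming a scikit-learn-style workflow; the data and feature names are made up for illustration.

```python
# Crack open the hood before deploying: model-agnostic feature importance
# via permutation. Data and feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "years_experience": rng.normal(5, 2, n),
    "played_lacrosse":  rng.integers(0, 2, n),
    "first_name_jared": rng.integers(0, 2, n),
})
# A (deliberately) biased label: depends on a junk feature, not just skill.
y = (df["years_experience"] + 2 * df["played_lacrosse"]
     + rng.normal(0, 1, n)) > 6

X_tr, X_te, y_tr, y_te = train_test_split(df, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# How much does the held-out score drop when each feature is shuffled?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, mean in sorted(zip(df.columns, imp.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name:20s} {mean:+.4f}")  # a proxy near the top is the red flag
```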
Once these systems get complicated enough, it gets really hard to do that. Unless you are doing linear models or tree ensembles, it can be really difficult to get a clear picture of variable importance/effects. It's one thing to extrapolate locally (fine for linear models), but that has little to do with anything relevant when there's lots of higher-order interactions.
"Once these systems get complicated enough, it gets really hard to do that."
That's why I qualified it. Obviously if you're working in neural nets, you're not going to be cracking open that hood. But one of the dirty secrets of ML is you don't always need those techniques. There's the space of problems where linear regression and similar techniques are all you need, there's the class of problems where it doesn't matter what you throw at it you're not getting anything, and the "in between" where deep learning and such helps certainly has some interesting problems in the space, but it isn't necessarily the majority of problems you encounter in the wild.
The scary thing is that not being able to crack open the hood doesn't mean the model isn't complete nonsense. It just means you can't examine it.
Which would you rather have?
- My model is accurate, for reasons I can't understand, verify, or externally justify
- My model is not accurate
edit: to be more clear on my stance, if a modeling technique is too difficult to justify or explain, it is not a good technique and should not be used, even if the results seem appealing.
The second one. It's important to have a clear picture of biases and how your model could fail to generalize. In the case of social data from a deeply biased society, these issues can be seen a mile away.
Orchestras had a male/female diversity problem until they started doing "blind" auditions, where you could hear the music but not see the person playing the instrument. They even went so far as to reschedule an audition if the gender could be figured out some other way, e.g. heels clicking on the floor.
The outcome? More women started appearing in traditionally male instrument positions, e.g. the French horn, which for a long time was considered a "man's instrument".
You could do something similar in tech, where one group conducts the coding interview under supervision (to make sure it's actually the candidate taking the test) and another group reviews experience etc., but without knowing the gender of the candidate.
Granted, people may try to hack this system by using back channels etc but I would argue it's better than nothing.
PS: My favorite back-channel approach was a college dean who gave the same letter for all grad school references but would use the student's first name for a positive reference vs. their last name for a negative reference.
e.g.
- "Susan is fantastic" == positive
- "Miss Jones is fantastic" ==
The "blind orchestra auditions" story gets repeated a lot, but the study it originates from did not find any statistically significant effect (and even so the ultimate outcome was that men benefit from blind auditions, not women).
I've been on the equity side of the fence for a long time now. My conviction comes from the fact that I (as very-low-but-not-poor middle class) got a chance to go to a "wealthier" high school and that made all the difference in my life. There's such a huge gap in confidence, persuasion, and even believing in what one can realistically achieve based on who you are surrounded with.
My college, on the other hand, had a mix of students from all sorts of backgrounds, and that's where I saw first-hand the huge difference this makes. Even those with good grades came across as extremely lacking in confidence. A variety of other factors then adds up to completely mask their talent.
And because there's not a single interview (at least in our industry) that doesn't look at your communication and other soft skills, these students mostly hung back and didn't get the same internships or jobs that I could get. And I was from the exact same social background as them.
These differences have become enormous after about a decade and a half. That's why I'm convinced that comparing two people works well only when the platforms they are operating on are comparable as well. Otherwise, you are not-quite-but-close-to comparing just the platforms/social strata.
I'm happy that the story played out well for you. Did you get that chance to go to the "wealthier" school from a process borne of equity, or one of equality? I'd imagine the latter is something like, you tested into and maybe got a scholarship for the school regardless of the means of your family, and the former something like, you were accepted into and maybe got a scholarship for the school regardless of your test score based on some immutable characteristic of your family.
One of the core tenets of classical liberal thought is that children should not bear the sins of their fathers, and I think from that stems a lot of support for public schools acting as an equalizer between children of higher-achieving and lower-achieving parents; because whether on the equality side or the equity side, most people agree that being born to a poor family, or one that does not value education, should not determine the entire path of your life, start to finish.
But your view seems too pessimistic; from your belief that two people cannot be compared unless they are coming from an equal footing, it stands to reason that you also believe that there are basically no individual differences, only group differences. And that individual effort matters little in light of their background.
Should we hire as a salesperson someone who lacks confidence and interpersonal skills, or a mason someone who is physically weak? Sure, nobody chooses to have a stammer or be asthmatic as a child, and those differences do compound over time and you end up with someone who finds it hard to make a sale or lift a concrete block. And I believe in giving people a chance. But at some point, individual efforts should matter.
you benefited from your spot in what you describe as a wealthy high school, and we read into that better teachers, more extra programs, etc. (not a challenge or a criticism). In general, though (i.e. not in public school), taking a spot in a program denies that spot to somebody else. The human capacity for a sense of fairness and rooting for the underdog on the one hand says "great, it's nice that it happened for you!". But in the types of equity/equality decisions that are being made in general, what happened to the other person who didn't get the benefit you got? What if she was "better qualified" or "more deserving" or "would have benefited society at large more"?
Honestly... why bother though? Whatever process you put in place to ensure diversity will be scrapped as a failure if it doesn't produce a very specific outcome (say, 50% men and 50% women). The only process that will be considered a success is one that leads to a particular outcome, so why not stop with the whole song and dance and just mandate the outcome and stop playing games with the process to get there?
I don't think so. They're saying, just mandate diversity, rather than trying to develop complex processes that are judged on how diverse their outcomes are. Which is a fair point.
But, the piece I think OC is missing is that another worthy objective is to reduce bias, which may or may not lead to diverse outcomes. So, that's why it would still be worth doing something like a double-blind coding test if your objective is solely to reduce bias.
If your objective is to boost diversity, then OC is right that a double-blind coding test is a complex route to that outcome, and a different, more direct and explicit route might be better.
you're saying "do these things to get better" and he's saying "doesn't matter because": the people who most frequently say they are against bias and they are in favor of diversity in fact don't care about either, what they care about is evening out the employed %ages of (choose euphemism) underrepresented, vulnerable, at risk, traditionally overlooked, etc. etc., with no regard for measuring qualification, skill, productivity, preparedness, etc., and they also want equal pay regardless of the value of the work being performed.
(people who become wealthy generally have skills that are highly leveraged over many customers: blockbuster films, popular music, ubiquitous cellphones, disruptive X, etc. But shouldn't childcare workers earn as much? Our children are precious, our children are the future... but children need full-time care at a very high adult-to-child ratio, and looking at traditional societies, essentially half the population has made excellent childcare workers and the other half has been willing and able to do a bang-up job in a pinch. So while it's vital to each child, it's not exactly a skilled task, and will never command a high salary because there isn't money to pay for it.)
> Whatever process you put in place to ensure diversity will be scrapped as a failure if it doesn't produce a very specific outcome (say, 50% men and 50% women).
In my experience, the goal is usually any increase, or the industry average. Never 50%.
I think playing an instrument is easier to do this kind of interviewing for than software development. The latter is mostly about communication, whether it be by speech or writing. Those are things in which gendered signals are found all over the place.
I would say it's actually the opposite. Most comms in software engineering is async/textual, and the work we produce is also textual. The takeaway here for me is maybe there is something to be said for doing an all-text interview.
And yet my brain functions at least 100x faster than I can speak, leaving me to unsnarl the proverbial traffic jam in my head as the rate at which speech can communicate ideas lags behind the vast decision trees quickly generated in my thoughts.
It's much more effective for me to write something that's both clear and persuasive than trying to express the same idea with synchronous, expository speech. For full disclosure, I was trained as a writer.
Whenever the interview process includes a writing component, I invariably ace them. I'm a software engineer.
Also, if I want to know how good someone is at playing a musical instrument, I can usually get a good idea from just a few minutes or even seconds of hearing them play. There's no similarly fast way to judge how well someone will perform in a programming job. (Or at least if there was, hiring would be a lot easier.)
Andrew Gelman had a good post[0] about the blind orchestra auditions:
> I think they’re talking about the estimates of 0.011 +/- 0.013 and 0.006 +/- 0.013. To say that “the impact . . . is about 1 percentage point” . . . that’s not right. The point here is not to pick on the authors for doing what everybody used to do, 20 years ago, but just to emphasize that we can’t really trust these numbers.
Along the same lines, one recruiting startup did a voice modulation experiment[1] to test the effects of masking gender:
> After running the experiment, we ended up with some rather surprising results. Contrary to what we expected (and probably contrary to what you expected as well!), masking gender had no effect on interview performance with respect to any of the scoring criteria (would advance to next round, technical ability, problem solving ability). If anything, we started to notice some trends in the opposite direction of what we expected: for technical ability, it appeared that men who were modulated to sound like women did a bit better than unmodulated men and that women who were modulated to sound like men did a bit worse than unmodulated women. Though these trends weren’t statistically significant, I am mentioning them because they were unexpected and definitely something to watch for as we collect more data.
Australia did a study[2] where they hid gender on resumes for high level Australian Public Service positions:
> What we found is that de-identifying applications at the shortlisting stage of recruitment does not appear to assist in promoting diversity in hiring. In fact, in the trial we found that overall, APS officers generally discriminated in favour of female and minority candidates. This suggests that the APS has been successful to some degree in efforts to promote awareness and support for diversity among senior staff. It also means that introducing de-identification of applications in such a context may have the unintended consequence of decreasing the number of female and minority candidates shortlisted for senior APS positions, setting back efforts to promote more diversity at the senior management levels in the public service.
I notice a lot of detracting responses here, but most aren't saying, "You're wrong." They're saying, "It's wrong of you to express these things unambiguously." Why is it that we let the urge to pretend gender and sexuality aren't real prevent us from addressing the problems we have with those matters? It's not an adult way to act.
> if someone uses a woman emoji she is definitely a woman unless there's a specific reason to do so.
I'm a man, and I sometimes use woman emojis because I like the look, or because it's fun. Do either of those count as "a specific reason to do so" in your framing? If not, consider how well your observations actually generalize to the world, and thus the universality of your statements.
> i noticed it was really easy for me to pick out that the one masked example was not a normal man, i'd have guessed either gay male or varbie gone too far.
Perhaps that was because the article clearly labeled which one was the original woman's voice and which one was modulated to sound like a man? There's an easy way to test this: have the interviewer guess the candidate's gender and see if they can guess consistently correctly.
that's true but if you listen to the two samples it's very clear the voice is not a typical man. but agreed it would help to do a blind test and see if people could pick them out.
not sure what you mean? my only point is if there's an unconscious bias concern it's pretty hard to accurately address in situations where people don't just play music.
meh, the way you're talking about the things that women do more comes off as pretty condescending
"I feel" / "maybe we should" are not weasel words, they are phrases to start a conversation with an acknowledgement that you're not perfect
"effusive praise for pedestrian situations" what does this even mean, different people speak more emphatically about things
"don't get me started on emojis" why not?
you're not saying anything hateful or whatever, but idk, if you were having this conversation with me in real life, I would come away with the impression that you do not take women very seriously
i mean "average man". the modulated example sounded a lot like the stereotypical "gay voice", how is that observation worrisome? since that voice is intentionally not like a regular man i don't see how it's problematic.
The stereotypical gay voice is mostly unintentional and not a reliable predictor of sexual orientation.[1] "Typical" is probably the most neutral word for what you meant.
What definition of normal do you operate on? Surely it’s a minority of people, compared to the number of people who do not identify as gay. In sociology this is called social deviance, a feature of human variation
Men are a minority of people. Being gay is more common than having green eyes. Social norms where most HN commenters live allow both. Social deviance means breaking social norms not being in a minority.
sure, not disputing that, i was using it as a synonym for "typical" or "common". that's a pretty common use of the word btw, google says it means "conforming to a standard; usual, typical, or expected"
i also wasn't talking about gay people in general, only gay people who use the stereotypical voice which is a pretty small fraction of them. i think you're reading too much into nitpicking that word instead of the other three paragraphs.
i mean if you listen to some of them, that's the voice: a woman's tone and inflection but pitched down closer to a man's due to the effects of heavy androgen exposure. how's that "yikes"? the point is the modulated woman still sounded like a woman, because there's more than pitch that sets men's and women's voices apart.
This sounds like the start of a chapter in "Weapons of Math Destruction", but the "and then this bad unforeseen consequence occurred" half of the chapter is waiting to be written.
I agree that AI "could" help remove bias, not to beat a dead horse, but it is subject to its own training bias. Maybe I'm stuck in 2018 NN models, and newer model authors and trainers are far more aware of that. But this seems like an extremely risky place to put AI.
Not to be glib, but IMHO the biggest obstacle to diversity in the workplace is people who think "diversity in the workplace" is some kind of ideological conspiracy that is out to undermine them personally.
> "diversity in the workplace" is some kind of ideological conspiracy that is out to undermine them personally.
It doesn't have to be a conspiracy to be unfair. At my place the leadership team spend about 30 minutes every week complaining about the number of old white people who work at the company. I'm not saying this is true of all companies, but I think the issue is often that leadership pushes diversity in such a way that makes people feel excluded.
My previous workplaces straight up reserved headcount for "diverse" candidates. In other words, we prohibited white and Asian men from a segment of our available headcount.
There is no singular type of diversity initiative. Perhaps there are some that implement blind hiring to eliminate bias, or other methods of increasing diversity by eliminating bias. But I've never encountered one; in my experience they're all just good old-fashioned discriminatory hiring practices once you get past the effusive language.
Is anyone in power going to come out and say that it’s very obvious that you don’t need workplace diversity? Silicon Valley created trillions of dollars of wealth in just a few decades and it was staffed with 90% white, Indian, and East Asian men.
I sympathize a bit with things like Affirmative Action, but at the end of the day, a person (disregarding gender, physical condition, race, etc...) who earned it doesn't deserve to be displaced by others who didn't.
A. Created trillions of dollars of wealth doing what? Because it seems like Silicon Valley is well on its way to being the new "big tobacco" with the level of surveillance and advertising that has been normalized.
B. There's no way to know whether a more diverse Silicon Valley would have created $trillions + X dollars. Saying something was successful doesn't tell you anything about whether the alternative would have been more successful or not.
C. Wealth creation is a shitty metric. Ideally, all economic activity serves some purpose that makes actual people's lives better. This is why breaking windows to stimulate economic activity is considered a fallacy[1]. This is somewhat related to A, but saying "wealth was created so everything must be ok" is not very convincing.
If you go from $0 to $10T in the span of 5 years, but are now 25 and soon to die of cancer due to pollution caused by generating that $10T, your wealth, as well as the effort to gain it, was fairly useless.
This proves that wealth as a metric is imperfect (like any other metric), especially if we are ignoring all other metrics (we're not). In no way it indicates that wealth is useless.
Still waiting to hear which metric is more useful. It looks like you brought up life expectancy, but it correlates with wealth. Unlike wealth, it is extremely hard to measure due to enormous lag.
Sure, Silicon Valley could have been MORE successful. But it's a bit like telling LeBron James that he might have been better at basketball if he had eaten more protein in his breakfasts. It's like, really? I'm happy with being really fucking good. I don't need to change up anything.
The problem with B. is that there are infinitely many alternatives.
Perhaps Silicon Valley would be even better off if all the developers had pet llamas at their disposal. But there is no llama industry that would lobby for that.
True but it could have been staffed by other people and created the same things if history had worked out differently. I don't think diversity should be the aim of a company but I also don't think the gender and race of SV employees has much to do with its success
That's not really the point. Any individual person will have their own biases and shortsightednesses, and basic assumptions about the world. People with similar backgrounds will tend to have similar biases.
> Do you think that's why folks are looking to add diversity to the workplace?
No, but they say obviously untested things like this: "diverse companies are more profitable and more likely to achieve long-term growth as an entity and in their industry." -- Marketwatch
If that were true you wouldn't see the push for it; companies would be fine with their competitors missing the boat while they hired all the undervalued diverse candidates. I would prefer honest messaging such as "Diversity may cost in the short term but we believe it is morally worth doing." Then we could have an open conversation about goals and methods and costs.
Exactly, which is like saying "eating a high protein breakfast increases athletic performance." It may be true, and it may be applicable to the average athlete, but even if such a study was proven conclusively, I would not expect the top athletes in the world to change their breakfasts because of it - they have already achieved an outlier result and changing something is more likely than not to cause mean regression.
The authors of the paper do make a few concrete points about the problems with using AI to perform assessments on candidates. For example they mentioned that an AI designed to perform a Big Five assessment on a candidate might be meaningfully impacted by the candidate wearing glasses or having a bunch of books in the background.
The vast majority of the critique is that to the extent anonymization works, it does not produce the outcome the authors desire. They explicitly ask for group based discrimination to be pre-baked into any sort of AI system to produce equity, not equality.
"First, industry practitioners developing hiring AI technologies must shift from trying to correct individualized instances of “bias” to considering the broader inequalities that shape recruitment processes. Pratyusha Kalluri argues that AI experts should not focus on whether or not their technologies are technically fair but whether they are “shifting power” towards the marginalized (Kalluri, 2020). This requires aban- doning the “veneer of objectivity” that is grafted onto AI systems (Benjamin, 2019a, 2019b) so that technologists can better understand their implication—and that of the corporations within which they work—in the hiring process. For example, practitioners should engage with how the categories being used to sort, process, and categorize candidates may have historically harmed the individuals captured within them. They can then begin to problematize the assumptions about “gender” and “race” they are building into AI hiring tools even as they intend to strip racial and gender attributes out of recruitment."
Further, it’s important to compare the outcomes here to the counterfactual, which must involve lots of individuals making decisions that are not based on explicitly coded-up rules (i.e., rules that can be reviewed for bias and improved on collectively).
Not sure how this helps me get the candidates in the first place. Something's filtering non-white-male, non-asian-male candidates before they even apply to our organization.
Is something filtering out Asian basketball players before they reach the NBA? Or maybe they just don't decide to become professional basketball players. Likewise, among children who say they're interested in STEM, boys outnumber girls 3:1 as early as middle school [1]. The evidence doesn't indicate that they're being filtered out; they're not interested in the field in the first place.
The strongest plausible interpretation of their comment would count interest as a filter.
13 to 17 is mostly older than middle school. Preference at those ages is not evidence against social factors. The article said children lose interest in STEM careers as they get older. It referred to an MIT survey that found not having mentors or role models was one reason.
13 absolutely is middle school, and most 8th graders turn 14 during that school year. The point is that children have already exhibited the same disparity as we see in the tech industry by middle school. And I'm not really sure if they have a strong grasp on what the job really entails earlier than that. My niece wants to be an astronaut. She doesn't like math or science, but she wants to meet aliens and see their different styles of clothes and hobbies. If she were to change her stated interests in the future, I'm not sure if her underlying interests actually changed or if she just developed a better idea about what working in STEM really entails.
It's a loaded thing to say that girls "lose" interest in STEM, when it's unclear they even had that interest in the first place.
Only ~13% of CS grads are women last time I checked. So yes, the filtering is happening, and it's before the candidate even looks at the job advertisement.
Bad job ads? I believe there was some data showing that the "10x rockstar bla bla" style of ads tended to put off underrepresented minority candidates, in particular.
I've been involved in hiring since the late '80s in the IT sector. The only constant that was there from the start is 'we need to hire more women', which was understandable, as less than 10% of CS degrees went to women despite decades of positive discrimination, but every university department and every company wanted to be perceived as 'female friendly'.
It seems to be heading down a purely non-sensical direction. I am waiting for us to be told that who we find attractive is biased. So our dating/mating pools should be restricted to correct this embedded bias.
I will never forget. Some startup claimed they could find the best candidates for the job using "AI." They had a whole bunch of BS selling points. I think they even convinced a fair amount of companies to get on board too. I was thinking "how can people be so stupid!?"
I then found one of the "features" was that they have the managers/bosses take the exam. They then offer a package that can weigh candidates based on correlation to their boss. Well, that's what it did. The marketing pitch was something something, view if they're compatible with your culture, something something.
I then realized its genius: charge a ton of money to do basic linear regression, and spend 90% of your money on giving your clients an excuse to hire whoever they want without appearing biased. They would even appear "virtuous" because they let the "AI" decide.
Just awful, lol.
Of course, I never got to see the internals and I don't want to slander but the "test the boss, correlate with new hire" was 100% real.
Well, that's literally what US colleges have been doing for decades. Basically outsourced IQ tests (since IQ tests during the hiring process are illegal).
IQ tests during hiring are illegal if that IQ test has a disparate impact against a protected class. (It turns out they pretty much all do, even those specifically designed not to, like Raven's Progressive Matrices: https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices)
A "totally not an IQ test" IQ test is safer, particularly if it is bespoke, because new/obscure tests are less likely to have been studied, and consequently, any forbidden trends in test outcomes are less likely to be discovered.
> IQ tests during hiring are illegal if that IQ test has a disparate impact against a protected class.
Are you sure about that?
That seems unreasonable and everything that I can find in 5 minutes of Googling suggests that they are legal-but-restricted, to cases where the skill/aptitude being tested is relevant to the job being hired for. (You couldn't use an IQ test to hire a fry cook, as an example.)
Hiring practices which discriminate based on attributes that would otherwise be illegal to consider may be permitted if those attributes are bona fide occupational qualifications (BFOQ). For instance, blind people can't be bus drivers.
IQ tests are verboten after Griggs v. Duke Power Co. As far as I'm aware, claiming a BFOQ exemption to this will fail, or at best, is unproven. IQ tests discriminate by race (even if unintentionally) and BFOQ exemptions don't apply to racial discrimination. Please correct me if I'm wrong.
The use of tests and other selection procedures can be a very effective means of determining which applicants or employees are most qualified for a particular job. However, use of these tools can violate the federal anti-discrimination laws if an employer intentionally uses them to discriminate based on race, color, sex, national origin, religion, disability, or age (40 or older). Use of tests and other selection procedures can also violate the federal anti-discrimination laws if they disproportionately exclude people in a particular group by race, sex, or another covered basis, unless the employer can justify the test or procedure under the law.
Similar language: One of the basic principles of the Uniform Guidelines is that it is unlawful to use a test or selection procedure that creates adverse impact, unless justified. Adverse impact occurs when there is a substantially different rate of selection in hiring, promotion, or other employment decisions that work to the disadvantage of members of a race, sex, or ethnic group.
Your understanding is technically correct but missing some nuance which is the heart of the matter. It's true the use of an IQ test can be justified. The important thing is that the use of an IQ test must be justified.
Due to the Supreme Court ruling, IQ tests are assumed to have a disparate impact. They are by default assumed to be discriminatory unless proven otherwise. A company must either prove the lack of disparate impact, or demonstrate a plausible connection to work performance (beyond "smart = good").
Most hiring practices are default OK, and a claim of discrimination must be proven in court. This hiring practice has already been decided in court, so it is default not-OK and must be defended. That difference is significant.
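For concreteness, the "substantially different rate of selection" language quoted above is usually operationalized as the EEOC's four-fifths rule: adverse impact is presumed when a group's selection rate falls below 80% of the highest group's rate. A minimal sketch, with invented applicant/hire counts:

```python
# Four-fifths rule check; the applicant/hire counts below are invented.
def four_fifths_check(outcomes):
    """outcomes: {group: (hired, applicants)} -> {group: (ratio, passes)}"""
    rates = {g: h / a for g, (h, a) in outcomes.items()}
    best = max(rates.values())
    return {g: (r / best, r / best >= 0.8) for g, r in rates.items()}

print(four_fifths_check({"A": (48, 120), "B": (12, 50)}))
# A: rate 0.40 (reference). B: rate 0.24, ratio 0.60 -> presumed adverse impact
```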
There are some great answers there that can be summarized as "no, they are not".
I am guessing you mean IQ tests discriminate by race in the sense that racial groups exhibit disparate outcomes on the tests. According to critical race theory, all disparate outcome is caused by racism, which would make IQ tests racist. Critical race theory is 100% wrong, of course, as are all the related intellectual disciplines that make the same false and unfounded claims. Unfortunately, this "invincible fallacy", as the great philosopher Thomas Sowell calls it, really seems invincible to facts, truth, and reality. It is a virus on our culture and societal consciousness.
IQ tests don't discriminate by race. They discriminate by IQ. That's literally the point of the test. Race has nothing to do with it.
> I am guessing you mean IQ tests discriminate by race in the sense that racial groups exhibit disparate outcomes on the tests
If you go to court trying to argue that your hiring and promotion system isn't racially biased and it merely exposed the truth of some racial groups being less intelligent than others, you're going to lose. It doesn't matter if you're factually correct, you're going to lose anyway and you'll probably be personally ruined for even attempting it.
Surprisingly we can find some agreement in what you're saying, but I strongly believe you should reword your initial claim to something along the lines of: IQ tests have shown disparate results along racial lines, and based on U.S. judicial precedent, you can't use them in the hiring process without justification.
If you argue IQ is unrelated to a job, could you not argue, in most cases, that college degrees are irrelevant to a job? There is definitely still massive disparate racial impact there. https://nces.ed.gov/FastFacts/display.asp?id=72
In fact, life has disparate impact. One of my favorite quotes on this subject is: "The tests are not unfair. LIFE is unfair and the tests measure the results."
This is a great, though long, video, on the history of mainstream academic views on racial disparity and how it should be handled: https://www.youtube.com/watch?v=Tx6TkC45y-A
I wish everyone would watch it. I will provide one small excerpt from it:
"a 10th century Muslim scholar noted that Europeans grow more pale the farther north you go, and also that the farther they are to the north the more stupid, gross, and brutish they are. However offensive this correlation between skin color and intellectual development may seem today, there is no reason in history to challenge it as an empirical generalization as of that particular time. Mediterranean Europe was more advanced than northern Europe for centuries beginning in ancient times when the Greeks and Romans laid many of the foundations of western civilization at a time when the peoples of Britain and Scandinavia lived in illiterate and far less advanced societies."
At no point in history have any two racial groups performed identically, and, arguably 99% of the time, the reasons have little to do with racism or societal injustice. In the above excerpt, there was certainly no racism causing the peoples of northern and western Europe to lag behind other civilizations of the time. Nor was there racism involved when these civilizations eventually rose in power and prominence. In both cases, skin color could be used to make correlations, but clearly skin color had nothing to do with causation. My point is: if a policy or procedure has disparate racial impact, it is not, by default, racist, discriminatory, or even wrong in any way.
I can see an argument against using IQ tests in hiring based on the belief that IQ is not directly correlated to job performance, but you could also say the same thing about probably 90% of the things people use to discriminate between job applicants. And there is no need to invoke race in the discussion when making these arguments, or to use race as the forcing function to make it illegal.
This same type of logic is used to say election security reform is racist or discriminatory. Well, really, just about every law is racist if you measure racism by disparate impact. Laws against murder certainly are racist by this criteria: https://ucr.fbi.gov/crime-in-the-u.s/2019/crime-in-the-u.s.-...
But it's a flawed, broken, and dangerously incorrect way to think about discrimination, and I hope our judicial precedent eventually corrects itself to something more sensible.
This is really interesting to look at in the context of other hiring testing -- like Google's practice of asking silly abstract questions like "invert a binary tree" for jobs which do not involve substantial algorithmic work.
Google doesn’t hire an ordinary dev for one specific job, they allow internal transfers and other teams expect you to meet the bar for your title. They do more useless protobuf schlepping by hand than they should, but I’d be surprised if any job were 100% that and 0% algorithms.
Huh? What are you referring to? Students? Faculty? I've been both and haven't seen this ever. Or are you trying to claim that the SAT and GRE are IQ tests?
The SAT and the GRE are IQ tests. They have the same psychometric characteristics and reveal the same information about testees. They correlate with IQ tests at the same level that different IQ tests correlate with each other. Results from IQ tests explain very large amounts of the variance in SAT and GRE scores, and vice versa.
This is not a strictly necessary state of affairs - if large numbers of people who knew nothing about chemistry started taking the chemistry GRE, the strongest signal provided by the chemistry GRE would switch from "IQ" to "whether or not the person has ever studied chemistry".
But (1) the test would remain an IQ test once you ignored the very low scores of people who had no reason to take the test; and (2) in reality, people who have never studied chemistry don't take the chemistry GRE.
a PhD scientist in California was awarded a multi-year $5m USD grant based on similar principles, around 2015, through the University of California. I sat through her lunch talk to a group of researchers. The merit of the idea was that she could find under-represented demographics of young potential coders by matching various online activity with their resume, and assuming a lot of low-income education credits are not going to look very good on a typical boilerplate resume. I was mildly annoyed at the time.
There's a qualitative difference between using AI for sourcing—that is, finding candidates you would not find normally—and using it for assessment.
The impact of a biased source is attenuated by having other sources and by the rest of your assessment process. Perhaps you end up with a marginally "worse" pool of candidates at the top of your hiring funnel... but, frankly, it's already hard to do much worse than the mix of LinkedIn keyword searches, resume screens and ad hoc channels teams use today. Crucially, if this happens, the impact on any given candidate is minimal—a bit more competition that, if the rest of the process is any good, should be negligible.
The impact of bias in how you reject candidates, on the other hand, fundamentally cannot be compensated for in the remaining process. A rejected candidate is already out! Any bias in rejecting steps is going to be directly reflected in the sort of teams you can recruit.
Mixing together sourcing methods that are biased in different ways can cancel out the bias. Mixing together assessment steps that are biased in different ways compounds the bias.
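A back-of-the-envelope illustration of that asymmetry, with invented rates: sourcing mixes candidate pools, so opposite skews can average out, while assessment chains filters, so pass-rate gaps multiply.

```python
# Sourcing: two sources with opposite group skews, mixed 50/50.
source_1 = {"group_a": 0.6, "group_b": 0.4}  # share of each group in pool
source_2 = {"group_a": 0.4, "group_b": 0.6}
mixed = {g: (source_1[g] + source_2[g]) / 2 for g in source_1}
print(mixed)  # {'group_a': 0.5, 'group_b': 0.5} -- the skews cancel

# Assessment: two steps, each passing group_b at 80% of group_a's rate.
pass_a, pass_b = 0.5, 0.4
ratio_per_step = pass_b / pass_a  # 0.8 per step
print(ratio_per_step ** 2)        # 0.64 overall -- the gap compounds
```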
Traditional resumes are pretty awful any way you look at it[1] and developing new ways of sourcing qualified candidates that get overlooked by the traditional approach doesn't create the sorts of problems that new assessment methods do, so the research you're describing seems unambiguously useful.
[1]: interviewing.io did a cool study looking at how resumes compare against the mock interview performance of users on their site. They just don't work well. This isn't a peer-reviewed study, of course, but, frankly, the data and approach are more compelling than some of the peer-reviewed research I've seen on the subject.
you make a couple of good points regarding sourcing, and hint at compounding effects, which is probably a very deep topic here. I tipped my hand by indicating I was mildly annoyed, and failed to differentiate between perceived positives and negatives in the process. So I will try to fill that in a bit...
Let's start with the aptitude/talent of the candidate. Certainly those with aptitude and/or talent might be hidden in the modern floods of resumes. People lie about skills, and supporting "automated investigation" might refute or amplify those assertions. People also miss critical elements in describing themselves, so something similar would apply there. Your point that this kind of automation might increase pools of candidates, and thereby give a chance to some people currently overlooked, is consistent with the merit found in the research grant.
Is that the whole story, however? Is there a shortage of skilled people doing paid computer activity? It is no secret that wage negotiation is ongoing and will essentially never be settled. Individual people grow and change, especially in the second decade of life. Industry conditions change, especially in web-related work. And the industry itself is famously seeking bargains. Outsourcing is a constant on the Internet. There are many more people than jobs at any given time, but most people are not great fits for, or even able to do, a given job. So how to draw some conclusions about where "automated investigation and classification of human beings as workers" might fit in?
It is literally true that some large-scale businesses seek to make work roles replaceable, such that low-skill, low-commitment people can be swapped out in large numbers in any given week. Wages and benefits can be cut, sometimes aggressively, amidst that churn. Is there a shortage of college-educated adults willing to participate as "churn" elements in what is also known as gig work? Perhaps, but for stable social lives, reliable income, the stability to raise a family, and retirement and medical benefits, gig work acts like a chemical solvent, dissolving the bonds, seniority, and commitments needed for longer-term stability to grow. More could be said about wage work in the USA and elsewhere, but overall this part belongs to the "uses of the technology" side, not the technology itself per se.
Another angle worth mentioning is the difference between developing skills in a workplace over time, creating the environment that sustains skill development, and seeing the benefits of skill development return to the skill holder rather than the corporation. This worker-side of work life historically overlaps with mentors, ranking, seniority, and the self-determination of the skill holder. Almost none of the "automated investigation" lends itself to this side, at first appraisal. If someone has undiscovered skills in rigorous or rare aptitudes, there would be some context around that, since it takes time to grow stronger; that sort of thing would almost certainly be included in a resume.
Overall, I claim that the "automated investigation" of employee aptitude strongly aids a "race to the bottom" of lowest wage for highest skill, and does almost nothing to help the skill holder build and retain skill over time. My reason for being "mildly annoyed" probably has more to do with systematically declining to engage mid-level skilled workers in mid-life in mid- to high-cost-of-living societies, and not so much with finding and engaging people who are overlooked by the current system. Since I do not have the problem of filtering through thousands of over-stated resumes for ordinary roles, the "refuting false claims on resumes" side is not a problem for me, though this technology is certainly primed for that.
Yea. I think the hiring process is broken in a lot of ways. I also think trying to get AI to solve it, or adding even more layers into the process, is only going to make things worse.
That's not entirely true. "Mr. Smith goes to the brothel every Tuesday" can be defamatory yet true, since it omits the crucial details that a) Mr. Smith is the garbageman and b) Tuesday is garbage collection day at the brothel. The defamatory aspect is that you are very clearly implying a false statement; if the implied statement were true, it would of course fail to be defamatory.
It is a very grey area. In general you can say anything you can prove to be true. But are you prepared to defend your proof against a legion of expensive lawyers in court?
(what is truth after all)
So if you lose, things can get ugly; most people try to stay on the safe side and rather keep their mouths shut (except for pseudonymous online rants).
> Asserting true facts, by contrast, is in principle not punishable.[76] In exceptional cases, however, it can constitute an insult (Beleidigung) under § 192 StGB. The precondition is that the assertion is made in a way that is injurious to honor. Such a "formal insult" can be committed, for example, by asserting a fact in a demeaning tone.[77] This applies, for example, to the Publikationsexzess (excess of publication), in which a fact, such as having outstanding debts, is publicly put on display with a denunciatory and honor-injuring effect.[78]
(Translated from the German original.)
He didn't mean "slander" in the technical legal sense; he meant "I don't want to badmouth someone perhaps unfairly, since I probably don't have all the facts."
It has honestly been so long I can't say for sure. It was early 2010s.
Seems like a lot of the marketing material is vague and you need to request a demo. Looks pretty silly, regardless. Play a bunch of "games" as part of the interview? I bet HR loves it tho, lol.
Maybe every boss deserves to be matched with the team that makes all involved most likely to mutually succeed, even if it's a peculiar & unrepresentative bunch of people who can only work with each other.
And further, if that best-effort assemblage still consistently malfunctions, the boss, & the whole team, should be encouraged to find other roles and teams that have a better chance of succeeding.
My question is why get so fancy? Why not just use a racial/gender quota system? Just say "we want X% from demographics A, B, and C" and then hire the best you can from those groups and call it a day?
That way you can just dial in the exact amount and types of diversity you want.
Because that sort of hiring produces less-qualified diversity hires. If you're running a for-profit business you're better off hiring from the largest pool possible.
If you want to avoid the less-qualified you can either try harder to hire better, which is slow and expensive, or you can offer more, which is expensive but works.
Hiring without regard for quality but paying the same seems like it would cement racist (or at best anti-diversity) feelings in the rest of the workforce and leave minorities without an available sense of accomplishment.
One real danger is that there are a lot of data points that correlate with race and gender. Machine learning can become very good at inferring race and gender from variables that appear unrelated to people but that in fact carry a small correlation. In the end, it is easy to use “AI” to whitewash your biases without ever explicitly targeting protected characteristics.
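One hedged way to check for this kind of leakage: try to predict the protected attribute from the supposedly neutral features. If even a simple classifier beats chance, the features leak the attribute, and any model trained on them can exploit it. A minimal sketch with synthetic data (the feature setup and correlation strength are invented for illustration):

```python
# Proxy-leakage check: can the "neutral" features predict a protected attribute?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, size=n)   # hypothetical protected attribute, 0/1
features = rng.normal(size=(n, 5))       # hypothetical "neutral" resume features
features[:, 0] += 0.8 * protected        # one feature correlates mildly with it

auc = cross_val_score(LogisticRegression(max_iter=1000),
                      features, protected,
                      cv=5, scoring="roc_auc").mean()
print(f"protected-attribute AUC from 'neutral' features: {auc:.2f}")
# An AUC well above 0.5 means the features leak the attribute.
```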
Anything that doesn’t produce the desired outcome is a nonstarter. There will always be some new group to advocate for. If such a group does not exist, one will be generated through intersectionality.
This AI is being marketed to companies looking to improve their diversity. If it acts as you expect, then they just go out of business. I think the more likely scenario is that it achieves the desired outcome not by eliminating bias, but by inverting it, as that is more likely to result in measurable success from the customer perspective.
My concern would really be that it ends up selecting for normalness. I could easily see it biasing against the homely, obese, people who struggle with mental illness, or the just plain weird. Would anybody check to make sure it wouldn't fail that way like they would with a more high profile bias like gender? Seems unlikely to me. I imagine some would even regard that as a positive feature.
> They say it is a dangerous example of 'technosolutionism': turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.
I definitely agree with this sentiment, however I think it's also important to remember how bad human reviewers are. I think a lot of times, because it is harder to collect data measuring the effect of human systems, we focus on how bad AI systems are. At least with an algorithm you can:
- try to identify biases via testing
- back test changes to see how those biases change
I think given the choice between equally biased human and algorithmic systems, I would have more faith that the algorithmic system could be meaningfully changed in the near term than a human system.
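As a concrete example, the testing the list above gestures at can start as simply as comparing selection rates across groups, in the spirit of the "four-fifths rule" heuristic used in US disparate-impact analysis. A minimal sketch, with invented counts:

```python
# Compare selection rates per group; flag ratios below the 0.8 heuristic.
# `decisions` maps group -> (number_selected, number_of_applicants);
# all numbers here are made up.
decisions = {"group_x": (30, 200), "group_y": (55, 250)}

rates = {g: sel / total for g, (sel, total) in decisions.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("selection-rate disparity exceeds the four-fifths threshold")
```

Try doing that kind of back-test on a panel of human resume screeners.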
It's more likely to work the other way around. Suppose you train your system to try to match human hiring decisions, and some of your hiring managers have an unconscious bias against members of group X, enough to make a small difference in probability of hiring. Training the model will probably find that bias, and the effect may be that belonging to group X, alone, will have a negative weight in the model. So unconscious biases may become explicit.
There are ways around this: test the model by presenting credentials that are identical except for (is in group X) vs (is in group Y) and looking for major discrepancies.
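That paired test is straightforward to mechanize. A minimal sketch, assuming a scoring function `score_candidate` and candidate records as dicts (both hypothetical stand-ins for whatever model is being audited):

```python
# Score records that are identical except for group membership
# and measure the average gap caused by flipping the attribute.
def mean_flip_gap(score_candidate, candidates, attr="group"):
    gaps = []
    for cand in candidates:
        flipped = dict(cand)                             # identical credentials...
        flipped[attr] = "Y" if cand[attr] == "X" else "X"  # ...except group membership
        gaps.append(score_candidate(cand) - score_candidate(flipped))
    return sum(gaps) / len(gaps)

# A mean gap far from zero means group membership alone moves the score --
# exactly the bias-made-explicit failure mode described above.
```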
AI is probably the only thing that will be able to help humanity clean up the mess that it’s still noisily making. I’m talking about real AI, not the Mickey Mouse data science buzzword that keeps getting promulgated by C-suite crayon eaters.
Doomsayers can stir up all the FUD they want about AI, and point to works of fiction as their gospel. I, on the other hand, welcome a future where we are fairly governed and managed by our AI overlords, rather than exploited by “fellow” man.
This sort of thing needs more advertising and more noise. People really need to grasp that AI gets biased based on the training sets you provide. You can't just blindly trust the results.
AI/ML is a poorly understood, but very well-hyped technology. Much like blockchain, it's being falsely promoted as a kind of panacea for all of mankind's problems.
`Yes, AI can boost workplace diversity in a number of ways. For example, AI can help identify unconscious bias in hiring practices, help managers develop more diverse teams, and support employees in feeling comfortable and included at work.`