Either women and men as groups are fundamentally equal and there are no intrinsic differences between the two groups, neither in their averages nor in their distributions. In that case women should not need any special treatment to advance in the same careers. Any imbalance is either the result of bigotry within the field of work or, prior to that, at the gatekeepers (e.g. college, K-12, family).
Or women and men as groups are different, either in their averages (i.e. on average, Xs are better at Y than Zs) or in their distributions (i.e. men and women are equally good at or interested in X, but one group has more outliers on both ends of the scale). In this case gender parity can only be achieved and maintained artificially, because a perfectly fair, unbiased selection would always result in a skewed balance.
Some studies seem to suggest the latter. We know this to be true in sports (which is why e.g. the Olympics are strictly segregated by gender). It only becomes a political problem as soon as we propose that this holds true outside the pure physicality of competitive sports.
It's in vogue to treat humans as brains in a vat as soon as we discuss these issues, but I'm not convinced this isn't the same fallacy as economists assuming purely rational actors or physicists assuming spherical cows in a vacuum.
EDIT: Obligatory note: gender discrimination is a thing, and not just in tech. Corrective measures may help with that. But if we don't know which of the two premises holds true (or rather, to what extent each one is true in this specific case), we don't know whether we can reach gender parity AND close the gender pay gap at the same time.
I think part of the problem is that there is no easy way to tell apart the two possibilities you describe, and corrective measures are not only warranted when the "correct" ratio is 50/50. In fact (perhaps counter-intuitively), corrective measures are probably most warranted when one gender is better than the other. I don't mean what you might think I mean: I'm not saying that because the ratio ought to be brought to 50/50. What I mean is that if the "correct" ratio is, say, 60/40, you need corrective measures to make sure it does not drift even further.
The reason is that this is not a problem with stable dynamics. If you know that Xs are better at Y than Zs, then upon meeting an X you will mentally assign them a higher prior "competence" than you would a Z. As you evaluate them, the prior will eventually be replaced by a fair assessment of their skill, but it will never disappear completely. The end result is that, at equal competence, you will hire more Xs than Zs.
Now, if you were equally likely to hire an X as a Z of equal competence, you would end up with a ratio of 60/40. But your knowledge of this ratio gives you a prior that favors X, which means you do not do that. Instead, you will get a more skewed ratio, like 65/35. Seeing the discrepancy, Zs will believe that they are being discriminated against, and this will disincentivize them a little from pursuing Y. The quality of the Z applicant pool will decrease, and the gap will widen. So the only way to really get the 60/40 ratio, paradoxically, would be to make sure that evaluators believe it is 50/50... but then they might feel compelled to compensate for what they believe must be their own bias!
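Here's a toy simulation of that feedback loop, just to make the mechanism concrete. Every number in it (the pool size, the 60/40 split, the prior weight, the deterrence factor) is invented for illustration; it's a sketch of the dynamics, not a calibrated model:

```python
# Toy simulation of the feedback loop described above. All numbers here
# (pool size, the 60/40 split, prior weight, deterrence factor) are
# invented for illustration -- a sketch of the mechanism, not a model.
import random

random.seed(0)

PRIOR_WEIGHT = 0.2      # how much the group prior bleeds into the score
N_APPLICANTS = 10_000
N_HIRED = 1_000         # hire the top 10%

def hiring_round(z_penalty):
    """One round: 60% X / 40% Z pool, identical skill except z_penalty."""
    applicants = []
    for _ in range(N_APPLICANTS):
        if random.random() < 0.6:
            group, skill = "X", random.gauss(0.0, 1.0)
        else:
            group, skill = "Z", random.gauss(-z_penalty, 1.0)
        # Evaluator's score: mostly observed skill, plus a prior that
        # favors X because "everyone knows Xs are better at Y".
        prior = 0.5 if group == "X" else 0.0
        score = (1 - PRIOR_WEIGHT) * skill + PRIOR_WEIGHT * prior
        applicants.append((score, group))
    hired = sorted(applicants, reverse=True)[:N_HIRED]
    return sum(1 for _, g in hired if g == "X") / N_HIRED

z_penalty = 0.0
for year in range(5):
    x_share = hiring_round(z_penalty)
    print(f"round {year}: X share of hires = {x_share:.3f}")
    # Zs see the skew, some strong Zs opt out of Y, and the average
    # quality of the remaining Z applicant pool drops a little.
    z_penalty += 1.0 * (x_share - 0.6)
```

Even though the skill distributions start out identical, the prior term alone pushes the first round above 60% X, and the opt-out feedback compounds it a little more each round.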
Anyway, it's a really complicated problem, and every side seems to be hoarding their own spherical cows about it.
Interesting. But that would still mean you need to determine the "correct" ratio and tune the corrective measures to make sure they don't accidentally overcorrect.
Additionally, there's still the problem that we're talking about spherical cows. Hiring policies don't exist in a vacuum. I'll talk about squares and triangles to keep it abstract.
Hiring only "the best of the best" is quite a popular strategy. Let's say the "shared average" model is correct and squares are overrepresented near the average of the applicant pool while underrepresented at the top and bottom. The "correct" overall ratio might be 60/40, but the actual ratio within the top (and bottom) percentile might end up being closer to 90/10.
Now, depending on the size of the pool, the squares making up 10% of that top percentile might be enough for one company to maintain its ratio while only hiring "the best of the best", or even for several companies. But at some point companies will have to either sacrifice their ratio and hire more triangles, or sacrifice their standards and hire squares who are weaker than some of the triangle candidates.
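A back-of-the-envelope sketch of that tail effect. The spreads below (1.0 vs 0.8) and the 50/50 population are invented just to isolate the variance effect; with identical averages, the wider-spread group still dominates the top percentile:

```python
# Sketch of the "shared average" model: both shapes have the same mean
# ability, squares just have a narrower spread. The spreads (1.0 vs 0.8)
# and the pool size are invented for illustration.
import random

random.seed(0)

POOL = 200_000
pool = []
for _ in range(POOL):
    if random.random() < 0.5:                            # 50/50 population
        pool.append((random.gauss(0.0, 1.0), "triangle"))  # wider spread
    else:
        pool.append((random.gauss(0.0, 0.8), "square"))    # narrower spread

pool.sort(reverse=True)
top = pool[:POOL // 100]                                 # top percentile
squares = sum(1 for _, shape in top if shape == "square")
print(f"squares overall: ~50%, squares in top 1%: {100 * squares / len(top):.0f}%")
```

With those made-up spreads, the squares' share of the top percentile comes out far below their 50% share of the whole pool, which is the whole point: modest differences in spread get amplified enormously at the tails.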
Note that so far I haven't even been talking about discrimination or perceived biases. This is what happens if we have perfectly rational actors with perfect knowledge of the market simply enacting the policy "only hire the best of the best" with the restriction of "try to maintain a ratio of 60/40".
You could argue that sometimes going for the weaker candidate is worth it to combat the chilling effects of perceived bias. But it should be obvious why it's naive to expect any company to choose to do so voluntarily when it means competitors that don't follow the rule will snap up the stronger candidates.
And so far we're talking about a single property that is split pretty evenly across the greater population (even if the hiring pool might be unbalanced). What if 60% of the population is yellow, 30% are blue, 9% are green and 1% are red? Diversity programmes often aim for equal representation of minorities, not just for proportional representation. So that means you want 25% red. But you also want this for both sets of polygons, so at a 50/50 ratio you need to try to hire 12.5% red squares, 12.5% red triangles and so on for all colors. Next imagine 20% of squares and triangles also have rounded corners. And 1% changed their number of corners at some point in their lives.
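The arithmetic of that intersection gets punishing fast. A quick sketch using the invented population shares from above (equal representation across the four colors, split 50/50 across shapes):

```python
# Quick arithmetic for the quota example above. The population shares
# are the invented ones from the comment; "equal representation" means
# each color gets the same share of hires, split 50/50 across shapes.
colors = {"yellow": 0.60, "blue": 0.30, "green": 0.09, "red": 0.01}
shapes = {"square": 0.5, "triangle": 0.5}

target_per_cell = 1 / (len(colors) * len(shapes))   # 12.5% of hires each

for color, c_share in colors.items():
    for shape, s_share in shapes.items():
        pop_share = c_share * s_share               # share of the population
        factor = target_per_cell / pop_share        # required over-selection
        print(f"{color} {shape}: {pop_share:.1%} of population, "
              f"target {target_per_cell:.1%} -> must be selected at "
              f"{factor:.1f}x their population rate")
```

Red squares are 0.5% of the population but 12.5% of the target, so they would have to be selected at 25 times their population rate, while yellow squares would have to be selected at well under half of theirs.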
This is clearly a field that needs dispassionate empirical study. Yet gender studies are seething with ideological bias, taking their conclusions as self-evident a priori. And critics are lumped in with those who are ideologically opposed.
I have no idea what the actual distributions look like. I don't know what the correct ratio would be. I know sexism exists. I also know plenty of women are put off by far more benign aspects of the field. I also know plenty of men are put off as well.
For all its flaws, the Google Memo got one thing right: appealing to emotion (what its author mistakenly called "empathy") is not the way to further our understanding of the situation. Personal anecdotes are heartwarming or gut-wrenching, but anecdotes are not data. When scientific results don't match up with anecdotes, that shouldn't mean the science is wrong. It just means "this warrants further study". Maybe the science is wrong; then we can find out how that happened and do more science while preventing the same mistakes. But maybe the anecdotes, as important as they may feel, are outliers. Or maybe both are true: there are real problems we need to address, but they distract from the actual cause.
It's not like climate change. With climate change, if we're wrong, addressing it just means we've wasted a lot of resources making the world better anyway. With identity politics (assuming companies actually "lower the bar" for minorities to "fix" their ratios), if we're wrong, we've ended up treating a lot of people unfairly just to end up with numbers that look fairer.
I think we should continue encouraging women and minorities to get into tech. I also think we should combat sexism and bigotry in the industry. But I also think we should not prejudge the conclusion when trying to understand the root cause of these disparities.