
It seems a system that produces different outcomes for two individuals differing only in their racial identity, or other "irrelevant factors", is unfair. The question is, of course, what factors actually are relevant.


> It seems a system that produces different outcomes for two individuals differing only in their racial identity, or other "irrelevant factors", is unfair. The question is, of course, what factors actually are relevant.

I'm interested in a little thought experiment:

1. for the sake of this algorithm there are two people: a white person and a black person

2. race and other "irrelevant factors" aren't recorded or used

3. except for race, these two people are otherwise identical

4. how can the algorithm discriminate between them?

It seems tautological that, lacking race as a feature to discriminate on, the algorithm won't be able to discriminate, right?

So how does the discriminatory information bleed into the system such that it produces different outcomes for two otherwise identical (or practically so) people?

I'm not trying to defend the algorithm as the perfect arbiter or anything. I'm just honestly a little confused. Is it that the irrelevant factors are actually recorded somehow, but shouldn't be?


The author explicitly acknowledges that relevant factors correlate; in fact, he suggests that race will creep back in as a factor even if it is excluded as an independent variable.

He is thus suggesting that, even given genuine group differences, different group outcomes are intrinsically unfair if the grouping happens to be race.
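
To make that "creep back in" mechanism concrete, here's a minimal, purely hypothetical sketch in numpy. Nothing in it comes from the article: the feature name (zip_group), the 80% correlation, and the bias coefficient are all made up for illustration. Race is never a model input, but a correlated proxy is, and a model fit on historically biased labels reproduces the race gap through the proxy:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Race exists but is never recorded as a model input.
    race = rng.integers(0, 2, n)
    # A recorded feature (call it zip_group) matches race 80% of the time.
    zip_group = np.where(rng.random(n) < 0.8, race, 1 - race)

    # Historical training labels carry a bias in favor of race=0,
    # on top of a race-neutral "merit" signal.
    merit = rng.normal(0, 1, n)
    past_label = (merit + 0.8 * (race == 0) + rng.normal(0, 1, n) > 0.4).astype(int)

    # "Model": the empirical approval rate conditioned on zip_group alone.
    rate_by_zip = np.array([past_label[zip_group == g].mean() for g in (0, 1)])
    predicted = rate_by_zip[zip_group]

    # Identical merit but different race usually means different zip_group,
    # so the race gap re-emerges even though race was never an input:
    for g in (0, 1):
        print(f"mean predicted approval, race={g}: {predicted[race == g].mean():.3f}")

Dropping zip_group fixes this toy example, but in realistic data many recorded features each carry a bit of the same signal, so no single deletion removes the gap.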



