IMO, no one wants to acknowledge simple facts for fear of retribution or being labelled racist; instead we keep dancing around this issue, getting more and more ridiculous.
Also, trust me, I can talk about this because I'm not white...
fact - people prefer people like them. Not even consciously; this is basic shit hardwired into us. I don't blame white men for being subconsciously biased toward hiring white men; literally any other group would do the same. Sure, we can try to fight that bias, but it's not at all evil or wrong to have that bias, only natural.
fact - taking an "agnostic" approach the way science does, of course the algorithms will reflect "biases". If men are statistically more likely to be programmers, or black people are more likely to commit crimes (STATISTICALLY), then the algorithm will pick that up. They are biases, sure, but also statistical realities.
Now we can debate whether we should actively engineer algorithms to fight these "biases" on a case-by-case basis (for example, focusing more on women might be a win if you can find talent no one else can), but there's no reason to start pointing fingers at the "evil white guys" on top who planned this from the very beginning... it's just more stereotyping.
hypothesis - she wrote this crap to gain publicity.
> if men are statistically more likely to be programmers, or black people are more likely to commit crimes (STATISTICALLY),
I think you meant black people are more likely to be convicted of crime. The problem with crime 'statistics' is that, on the surface, they seem coldly scientific, yet they are generated and derived via very biased, very human, very unscientific processes; there is a lot of bad data. The ACLU did research that showed that there is no statistically significant difference in the possession of weed between white and black people, yet more black people are convicted[1] for possession.
Here's a mind experiment: after watching this YouTube video[2], how skewed do you think the statistics for white female criminals (bike thieves) vs black criminals would be?
Yeah, I don't really know enough to argue that. You're probably right about the convictions.
I just think we need to be able to TALK about these issues, so that when a real expert looks at those statistics they can get to the truth of the matter, and say that truth whether it is or isn't politically correct.
The law says racism is illegal in certain situations, and society says racism is undesirable in most situations.
The law and society aren't claiming that racism is statistically non-optimal -- in fact, there are lots of things more optimal than status quo that many people would find totally horrifying.
If we are widely replacing human systems with AI systems, I think this is a legitimate concern.
I really depart from the article in two areas:
1. The AI will inherit the biases of its creators. This is possible but far from guaranteed. And relatedly, inclusivity of the development team guarantees nothing regarding the goals of the system.
2. Criticising the people who are warning of the problem and trying to do something about it. This is related to the AI control problem. There is no switch that can be flipped that will prevent AI systems from having Bad Ideas. It's not that we just aren't flipping it to preserve our chokehold on capitalism. Implementing morality in AI systems is a genuinely monumental problem. And the people who are doing something about it are behaving very altruistically.
Agreed! If we did nothing to correct inequalities in society, the biggest and strongest would rule over all - so yes we must have our own values and stick to them.
And with AI, we must make these values explicit, which is very difficult to do; I agree this is a very important problem to solve, and the people doing it should absolutely be rewarded.
Honestly, reading the article again after your summary, I found it very reasonable :p I think the headline just ticked me off.
No one is stopping other people from getting in on the debate; they absolutely should (and I'm sure there are roadblocks in their way, and people who really are racist). It just feels wrong to implicitly blame all the "bad white people" for that. Blame those who cause the problem. Otherwise we are back to stereotyping.
"The law...". Whose law? "Society says...". Which society? "Implementing morality". Whose morality?
Also, why wouldn't you want things to be optimal, depending on what they're optimizing for? I thought optimizing was the exact point of machine learning and AI.
By morality I was thinking along the lines of socially accepted behaviour in the contemporary United States, and mainstream Christian morality which isn't the official state religion, but in my opinion the basics of it are taken as a given in politics, government, media, and academia.
Overall, I'm referring to mainstream anglo law, society, and morality.
> why wouldn't you want things to be optimal, depending on what they're optimizing for?
The reason is that when you express your goals, you don't fully understand what the consequences will be. There may be consequences that are totally repugnant to you. The child-story level version of this is where you get a genie which gives you your wishes in a horrible way: i.e. I wish to be the richest man in the world, and so the genie kills everyone else. I want a nice big house like my parents, and so the genie kills your parents and you inherit. Et cetera.
Telling the computer to do what you want is notoriously difficult even for simple imperative programming, to the point where many people think a large fraction of the population just isn't up to it intellectually; if you need proof of this, you can search for fizz-buzz interview stories. Setting up goals or incentives for a system that behaves in a way that you can barely understand is even more difficult.
If you change the labels of the datapoints, say black to apples and white to oranges, you'll change racism to fruitism. It's unlikely that a truly racist system would behave that way.
I do agree with the article that feeding biased data will result in a biased system and we need to be aware of that. But calling it racist and sexist is sensationalism.
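The relabeling point above can be made concrete with a minimal sketch (all category names and counts here are hypothetical): a frequency-based classifier only sees opaque tokens, so renaming the categories leaves its behavior completely unchanged.

```python
from collections import Counter

def train(rows):
    """Count outcome frequencies per category value."""
    counts = {}
    for category, outcome in rows:
        counts.setdefault(category, Counter())[outcome] += 1
    return counts

def predict(model, category):
    """Predict the most frequent outcome seen for this category."""
    return model[category].most_common(1)[0][0]

# Hypothetical, deliberately skewed training data.
data = [("black", "convicted")] * 6 + [("black", "acquitted")] * 4 \
     + [("white", "convicted")] * 3 + [("white", "acquitted")] * 7

# Rename the categories: "fruitism" instead of racism.
relabel = {"black": "apple", "white": "orange"}
data2 = [(relabel[c], o) for c, o in data]

m1, m2 = train(data), train(data2)

# Identical predictions under renamed labels: the model encodes the
# statistics of its inputs, not any concept of race.
assert predict(m1, "black") == predict(m2, "apple")
assert predict(m1, "white") == predict(m2, "orange")
```

The skew in the output comes entirely from the skew in the training rows, which is exactly the point: fix the data, not the label.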
The last person I met from Google's machine learning group in search was female and Chinese. The big name behind machine learning is Andrew Yan-Tak Ng; he was a professor at Stanford and is Chief Scientist at Baidu now.
They're complaining about Nikon cameras not recognizing Asian faces properly, and this is discrimination? Nikon is a Japanese company. Headquarters is in Tokyo. The CEO is Kazuo Ushida.
My experience as well. I'm from Seattle and have met (at my office and others) an overwhelmingly large number of Asian people, and especially Asian women, working in data science and machine learning.
I think their point was the training data may have a lot of white guys. I don't know what Nikon used but if they just googled the web for images they'd probably end up with quite a lot of white subjects.
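A toy illustration of that under-representation effect (synthetic numbers, not anything about how Nikon's detector actually works): a detector tuned on a training set dominated by one group can end up treating the under-represented group as noise.

```python
def fit_threshold(samples, trim=0.02):
    """Accept the central range of the training data, discarding the
    outermost `trim` fraction at each end as presumed noise."""
    s = sorted(samples)
    k = int(len(s) * trim)
    return s[k], s[-k - 1]

def detects(bounds, x):
    lo, hi = bounds
    return lo <= x <= hi

# 99 training samples from group A (feature value near 10) and only
# 1 from group B (near 20) -- group B is barely represented.
train_data = [10.0] * 99 + [20.0]
bounds = fit_threshold(train_data)

assert detects(bounds, 10.0)        # group A is detected
assert not detects(bounds, 20.0)    # group B was trimmed away as an outlier
```

Nothing in the code mentions any group explicitly; the failure falls out of the composition of the training set alone.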
The reason this is garbage is that there is not a single naturally occurring domain in society in which groups can be found to be represented equally. I'm pretty sure this is true in nature as well. So one can, at their sole discretion, analyze any area of life and make statements like "systemically oppressed this", "unequally represented that". It's like staring at an ink blot and being asked what you see.
I have the nagging sensation that if it were up to today's hyper-sensitized media to decide how society should look and function, we would all be grey globs in a grey world, devoid of any differences.
The more depressing truth? Striking these chords is an absolute goldmine for ratings and clicks. Everyone is naturally curious about how they might be currently oppressed or disadvantaged; it plays to our instinctual tribalism.
So please, realize you can do anything you want in this world, and don't be seduced by hate and bitterness from some writer sitting in Soho that has a click-quota to meet this month.
This article makes a perfectly valid point- AI is only as good as the data you use to train it. If you feed it bad, biased data, then the AI will behave in bad, biased ways.
These biases can be major (no Amazon delivery to black neighborhoods) or minor. I'm reminded of a gaming podcast I heard (can't remember which one) where a guy recounted watching a female journalist try VR goggles that couldn't detect her eyes because she had mascara. Apparently no one making the headset had tested the effects of that kind of makeup.
The article is right. If we are serious about creating products that revolutionize everyone's lives, we need to involve more kinds of people. Our perspectives are limited. We can't understand everything. That's the point of having a diverse team. Like Ben Thompson says, there's a very strong business case for diversity because "You don't know what you don't know."
The article begins by conflating algorithms and training data, and that becomes the "sticking point" in the reader's mind even though she clarifies the distinction soon after. This is no surprise, since the obfuscation helps back the sinister, prosecutorial tone of the piece.
While I agree that the article raises an important point, I don't really see how more diverse development teams would have fixed any of the problems raised.
This article falls into the same fallacy as a lot of the more postmodern social science articles.
The whole point of AI and machine learning is to find things that are not immediately obvious but backed up by the data.
The author of this article is suggesting that if the conclusions of this research are politically unfavorable, then there has to be bias/racism/sexism somewhere, even when the approach is race/gender/socioeconomic agnostic.
Which runs immediately counter to both machine learning and research in general: "Dang it, run the numbers until they support the conclusion I support."
It's like the author doesn't understand the basic premise of machine learning or research. If the datasets are restricted in some way unfairly, that's something to be looked at. Algorithms are generally fair and unbiased.
"In the United States, this could result in more surveillance in traditionally poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are scrutinized even less."
More policing reduces crime. The author seems to think that people living in these poor, nonwhite neighborhoods would rather see the police resources go to wealthy, white neighborhoods. But studies show that minorities and people living in high-crime neighborhoods mostly do approve of the police. There is a lot of cognitive dissonance here: is crime reduction through more policing in poor, nonwhite neighborhoods the right goal, despite sometimes justified skepticism of the police, or not?
It's a combination of two separate problems: the skewed demographics of software engineers, and the continued requirement of an intuitive understanding of the problem AI is being developed to solve. If you're building an AI system to help police predict crimes, you want a detailed enough understanding of the places it's going to be applied to know what the impact of its predictions would be, and how to shape the system to actually produce a desired result. Like with UX design, going from a specification to a logically valid implementation isn't necessarily going to produce results people actually want.
It's much the same as the way, say, startups trying to market products to new mothers or high-school students have a hard time because they can't "eat their own dogfood" if they're all late-twenty-something white men. Huge markets lie unserved, I'm sure, because they're demographics that don't tend to produce software engineers or entrepreneurs.
> The reason those predictions are so skewed is still unknown
Well, I bet the reason is historical statistics on recidivism, which presumably were used to train these algorithms.
But hey, this is the NYT; they cannot simply acknowledge the existence of differences in crime statistics between races, so 'the reason remains unknown'. Hilarious.
At least this article has a _slightly_ less hostile title, but the fact remains: it's not white men's fault that they have been innovative and early in the field of AI. If one absolutely must drag identity politics into AI it would be more accurate to say that women should participate more, and that's on them. Not white men.
Regardless, I don't see how being sexist and racist (as these articles are) helps anything.
What's hostile about the title? And you say that these articles are racist... that kind of reaction is really common when actual racism is being pointed out. It can be hard to differentiate; I've found myself feeling that something or someone is racist when really they're pointing out how I'm being racist, and my reaction is to get defensive instead of actually examining inside and unwinding my conditioned thoughts and responses.
You say it's not white men's fault that they were innovative and early in AI. Consider, though, when the field of AI was first being developed (in the late 50s): which black people or other people of color in the US would even have had access to the same resources that e.g. John McCarthy had? And why would they have had difficulty accessing those resources? Because of the laws white people had in place then.
So to say that it's white men's fault may be too simplistic, but it also has historical truth.
But explain how it's an attack. I wager that /you/ feel attacked.
Saying that the distant past has no relevance to people studying the field today is ridiculous. You're saying "the past has no relevance". Especially considering the scale, 56 years is not a long time for things to change. There are ample examples of how the social and political conditions that affected black people 56 years ago continue to exist today.
It says that there's a problem and it lays it at the feet of white men. If that's not an attack, I don't know what is.
EDIT: Since I can't reply anymore, I'll just add my comment here. I (like most people) am in no way responsible for the lack of women in AI. If you personally feel responsible because you know you've been sexist/racist, great, make amends and don't do it anymore. Tossing blame at people you don't know and saying they're responsible for your actions is not the correct course. Blaming white men is racist and sexist, so you can take this opportunity to learn and hopefully stop making generalizations about people regardless of their gender or race.
If someone has done an action, we can say that they're responsible for that action. There's no attack in identifying responsibility or ownership, but as I explained above, I myself have often interpreted this as an attack on me, when for example, someone was pointing out to me how I was being racist.
It's not blame or racist when it's true, so you're just being defensive. If you're "neutral" (think that you're not part of the problem) then you're still supporting and benefitting from the system that IS racist. You're complicit in your lack of action.
Systemic barriers like constant reinforcement from the media of the idea that women and minorities face systemic barriers to doing what they want in life, so they never try and instead become embittered by this dogma?
The beauty of computer science is that no one can stop you from doing it. Anyone who can afford a computer can excel in the field if they put their mind to it. If you want to talk about helping people who can't afford computers, that would be worthwhile. That's definitely not a gendered discussion though.
EDIT: I fully agree with WalterBright below but have been banned by HN from continuing my conversation in this thread for daring to suggest insulting white men might be racist and sexist.
In the D language community, we have many strong contributors whose real identities we have no idea about, beyond the online personas they create for themselves.
We don't know their age, race, gender, religion, nationality, politics, nothing.
It's as close to a pure meritocracy as is probably humanly achievable.