Google isn't a publicly funded academic institution. Whatever they do, including publishing, is part of the business/PR. So if management sees something as bad for business, it is reasonable for them to decide not to do it. If I were a shareholder, I can see how I might question why a person being paid $1M+/year (my understanding is that this is the minimum a manager in AI at Google would make) is publicly disparaging Google.
Even more, it sounds like Google didn't originally ask for a retraction; they just asked her to take into account newer research contradicting the paper - something that any researcher valuing integrity over agenda wouldn't refuse.
If somebody wants to do that kind of research and publishing, they just have to find another source of funding, I guess.
Anyway, the firing wasn't over the paper; it was over the unacceptably unprofessional reaction to it.
> If I were a shareholder, I can see how I might question why a person being paid $1M+/year (my understanding is that this is the minimum a manager in AI at Google would make) is publicly disparaging Google.
Salary aside (because I doubt she earned $1M+/year; my guess is more in the ballpark of $300k~$500k, and either way not really denting Google's finances), you are not wrong, but it's also worth understanding that we're entering the realm of the notion that companies can (and for many reasons should) be about more than maximizing shareholder value.
Also, if I'm being completely honest, from a PR perspective this could be worse than Timnit's paper might have been, just given how public it has become and the people involved. People internally are perhaps more comfortable having that paper unpublished and not having Timnit in their ranks, but as far as PR for Google goes, this isn't great.
Yes, this is absolutely far worse than just letting the paper be published. Outside that world, AI ethics papers are not exactly the kind of material that gets a lot of conversation at the best of times, but Google firing a Black woman for speaking up is the kind of thing that definitely does get talked about (as we can see here).
But that aside, Google should want this kind of paper published. They absolutely should want to know and discuss every possible weakness in the ethics of their approach to AI. Google has a scale of influence so large that how they act in areas like AI trickles down to many other organisations. To me, that gives them a responsibility to make it as ethical as is reasonably possible, and that will only happen if experts are allowed to speak freely.
One can make short-term arguments about how that hurts them, but the long-term damage of getting massive AI systems wrong will be far, far worse.
Even from the narrow view that in-house academic work is part of the PR budget (which I disagree with), Google has made a huge mistake here. This is a giant PR black eye for them. If the game is to pretend to have in-house ethical checks (say, to avoid actual regulation), then they need to at least generate the appearance of independence. The correct sinister move here would have been either to keep her on staff and give her the runaround, or to manage her out the door in a way that left her not particularly angry and signing a non-disparagement agreement.
But as others point out, it's entirely in Google's long-term interests to have internal critics who prod Google and the rest of the industry toward better long-term behavior. So I think it makes good sense for them to have independent academics who occasionally make people uncomfortable.
From a certain narrow, selfish perspective it's reasonable for Google to not want to have an AI ethics department placing a check on their leading edge research at all. Fortunately, we don't live in a world where corporations are the ones to determine right from wrong with total impunity.
> AI ethics department placing a check on their leading edge research
That reminds me of how in the USSR every non-minuscule factory, organization, etc. had a "department #1" - an ideological check-and-control department which, at sufficiently large or important organizations, even included KGB officers.
You have identified a similarity between two situations, but it is not a similarity that matters. The distinction that matters is one of normativity, and on that measure there is clearly no equivalence to be drawn here.
Every time it is the same: somebody gets the power to enforce the prevailing ideology of the time and place, and they happily do it under the premise that it is the most right and good ideology. Because they are such visibly pious followers and strict enforcers, these self-declared occupants of the moral high ground start to feel and behave as if they are more entitled and better than others. They hijack the cause and frame any disagreement with or critique of them as a heretical attack on the cause. The main point here is that once something becomes an ideology, "right", "good", etc. gradually lose any meaning in that context, and the only thing which really continues to matter, and grows more and more, is the enforcement of the ideology.
You are right that there have been many iterations of normative standards, but that does not imply that all situations, ideologies, positions, and so on are equally correct. It does not mean that we should stop trying to do better, nor that no progress has been made through these efforts toward a better world.
No, they're describing a particular scenario where the Political Officers of those norms wind up being a sick joke of careerism and weaponized ideology.
The Soviet Union was about equality for workers. Who could be against that?
I should have been more precise. The phenomenon the other poster was describing is independent of a particular norm or ideology. Talk of evolving norms misses their point.
I see. Yes, any norm or ideology can and often does grow cancerous and counterproductive. What I mean to do is cancel one implicature instantiated by that statement. It's not a reason to be a nihilist, or to stop holding things accountable in a normative sense, in this case as justification for giving Google unchecked free rein of AI development. That the Soviet Union preached and botched "equality for workers" doesn't make it any less important an issue, and indeed we could see every failure toward that end as progress, as in "finding 10000 ways that don't work".
In most cases, yes. In this case, because the paper was about bias in Google's AI models, it might not be just a business decision because the racial bias described in that paper might result in a disparate impact on users, which could be in violation of state or federal law.
1. There exist laws to prevent discrimination against people based on protected attributes
2. ML models make predictions based on attributes without interpretability (it's not possible to prove that protected attributes are not factoring into model predictions)
3. Empirical observation that a model proxies a protected attribute exposes the corporation to liability for regulatory non-compliance
4. Therefore any study that could expose bias of a model used in production is to be roadblocked or prevented ...
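Point 2 can be illustrated with a small sketch. Everything here is synthetic and hypothetical: a made-up "zip code" feature correlated with a protected group stands in for any real-world proxy. Even though the protected attribute is never given to the model, decisions driven by the proxy still split along group lines - which is exactly why excluding the attribute doesn't prove compliance.

```python
import numpy as np

# Hypothetical illustration of proxy discrimination (all data synthetic).
rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)               # protected attribute (never shown to the model)
zip_code = group + rng.normal(0, 0.5, n)    # proxy feature correlated with the group
income = rng.normal(50, 10, n)              # legitimate feature, independent of group

# A naive "model": approve applicants above the median score, using
# only the non-protected features (income and zip_code).
score = 0.5 * income - 5.0 * zip_code
approved = score > np.median(score)

rate_g0 = approved[group == 0].mean()
rate_g1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_g0:.2f}")
print(f"approval rate, group 1: {rate_g1:.2f}")
# The rates differ substantially even though `group` never entered the
# model: the proxy carries the protected information.
```

The gap here is large by construction, but the mechanism is the same one that makes point 2 hard to escape: without interpretability, you can't rule out that some combination of innocuous-looking features is reconstructing the protected attribute.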
To combat incentive chains like the above, it seems regulators will need to update the rules with third-party audits and an incentive structure that encourages self-regulation and de-risks self-detection and self-reporting of unintentional violations. Ideally, Google should not be put in a position where it is incentivized to police its own AI ethics research to ensure that such research doesn't expose its own illegal or non-compliant activity.
A company can still protect itself by fixing the model and delaying publication of a study about its bias until after the statute of limitations has expired.
In this case, recent changes to the statute of limitations for CA laws extended it from one year to three years, which could be why this whole process seems weird.
Well, imagine a manager in your company publishing a paper stating that your company's products are probably violating state or federal laws - all without raising the issue up the proper management chain, without working through the correct procedure with the compliance and legal departments, and without going to law enforcement if the violation still continues after all that.