
You do realize that anything you do for your employer belongs to your employer? And that protecting trade secrets is part of practically every employment agreement?

There are whistleblower exceptions, but generally, if you can't be trusted to keep secrets, you're not a professional.



This is whistleblowing.


Whistleblowing only applies to things that are illegal. Just because you have a personal objection to something doesn't mean you can leak whatever you want.


Just to be clear, you're saying that it is unprofessional to tell the press if you know your colleagues aided in the torture and imprisonment of a dissident?


Google is doing what already happens in China. What if the torturer used Google to look up torture techniques? We should just leak all of their source code, then. To go even further, the taxes China collects on the imports you buy may be paying the torturer. I'm going to need you to give me all your credit card information so I can make sure.


No, sorry, you can't evade moral culpability like that. Congress was correct to call Yahoo's actions regarding Shi Tao "inexcusably negligent behavior at best, and deliberately deceptive behavior at worst".


Dragonfly is about as far removed from Shi Tao as your Chinese import purchases. And since when has Congress having a vocal opinion on anything ever been anything but pandering to their voters?


Having a plan to directly feed the Chinese government search and user data is very far from "removed".

Why do they have that plan? Because there is absolutely zero chance China would allow them to operate without it.


That's unclear. Google hasn't launched, so they haven't done anything wrong yet, as far as users in China are concerned. This is all internal politics, which seems to be getting increasingly nasty.


If you leak your company's plans to rob a bank to the press, you're still a whistleblower, even if the bank hasn't been robbed yet. Whistleblowers are under no moral or professional obligation to let people become victims before coming forward.

If supplying search history to a government known to imprison or torture dissidents is an immoral activity, then bringing it to light is whistleblowing. You can make a cogent argument that supplying that data is morally fine, but you can't simultaneously say that facilitating spying is immoral and that it's unprofessional/immoral to reveal plans to that effect.


That would be a better argument if they were planning to do something illegal.


I readily acknowledge that it is perfectly legal to aid the PRC in imprisoning and torturing dissidents.


You keep making these arguments using hyperbolic examples that have little or nothing to do with what Google might (depending on how the internal politics work out) end up launching.

Predicting the future is tricky, particularly when it involves the decisions of complicated political processes where the decision-makers (most likely) disagree. Plans often change. We can't possibly know what they're going to do. Maybe nothing; plenty of large corporate projects never launch.


It's not hyperbole when this literally happened to Shi Tao, right down to the torture and imprisonment facilitated by Yahoo. This isn't some conspiracy theory: it's well-documented.

I certainly hope that Google doesn't follow the same path! But US tech companies have been complicit in human rights abuses in the past, so the public has every reason to remain vigilant.


Do you really think PRC torturing and imprisoning dissidents is hyperbole? Really?


Yes. Of course terrible things do happen in China. But the assumption that Google's product will cause this to happen is, at this point, pure speculation.

The product hasn't launched yet, may never launch, and we don't know how it would work if it did. I'd hope they're thinking about how to make sure nobody gets in trouble, but we don't yet know what precautions they've come up with.

There are lots of companies working in China. We know of one incident involving Yahoo. Do we assume, without evidence, that every other company working there has caused dissidents to be tortured?


That is the weakest, most cowardly response you could ever give.


> A computing professional has an additional obligation to report any signs of system risks that might result in harm. If leaders do not act to curtail or mitigate such risks, it may be necessary to "blow the whistle" to reduce potential harm.

Leadership was actively advancing the project while promoting obscurity and secrecy. Sundar did not act to mitigate the risks of harm to free information and to dissidents. Employees were not given enough clarity to make sound moral decisions.

The entire project reeks of a top-down ethics violation. You can't introduce AI ethics guidelines with a straight face while holding backdoor meetings with need-to-know engineers who are building a surveillance and information-manipulation system.

An objective party within Google should work hard to protect Google's values. To me, an outsider, Sundar can no longer be trusted on responsible, ethical AI (and, by extension, on AI itself). There are probably some misaligned incentives at work.

> As a leader in AI, we feel a deep responsibility to get this right.

So get it right. Start by fixing the wrongs and keeping your actions consistent with your messaging.

Or tell me how planning an opaquely censored, dragnet-surveilling, privacy-intruding, and authoritarian-friendly search platform is consistent with:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

7. Be made available for uses that accord with these principles.

We will work to limit potentially harmful or abusive applications.

We will not design or deploy AI in the following application areas:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

The only thing consistent with the AI ethics guidelines (a plan for the future, already abandoned upon release) is the pledge to technical excellence. I am sure that Google, as the leader in Search, is able to build a fine custom solution for the Chinese government.



