> Firstly, this is a content policy, this is separate from actual enforcement. I would doubt that google enforces this proactively, rather they want a possibility to shut down what has now been dubbed 'fake news'.
Discretionary enforcement power is part of the problem, not a mitigating factor. The policy itself simply gives them carte blanche to remove content with which they disagree:
> When applying these policies, we may make exceptions based on artistic, educational, documentary, or scientific considerations, or where there are other substantial benefits to the public from not taking action on the content.
Even if we give Google the benefit of the doubt and grant that initial enforcement could be judicious, wise, and a net positive for society (as if "a net positive for whom?" were an easy question to settle), "substantial benefits to the public" is not a limiting principle.
History has taught us that without real, adversarial constraints, this kind of power will be mishandled and abused. Eventually, Google will make mistakes. In their zeal to prevent misinformation and harm, they will bury a promising drug therapy and it will cost lives. They will suppress evidence of a crime, and they will make exceptions that happen to benefit their biggest markets.
They have the right to do this, but it is surely wrong for us to delegate our judgement to them.