
I appreciate you're trying to help but this is infuriating.

That puritanical, prudish attitude has already done so much damage; we should stop accepting it as "Americans gonna American".



I think Google's problem is that they aren't moral enough. They have no dedication to an ideal of actually caring about people or empathy, or any number of other things. They are greedy, mechanistic, and arrogant. Moral people aren't like that. They don't even uphold American values of liberty, free market, or freedom of speech. If they _were_ moral, a lot of those things would be solved.


What we should accept is that platforms which are not designed to be neutral won't be.

Is the problem that Google carries with it the puritan American culture, or that too many people who might not share that culture rely on Google?


I think that, in this specific case, this is pretty clear: If there's a culture that does not allow a dictionary to define rape, there's something wrong with this culture.

I'm certain most Americans agree with me here, which makes me assume that this specific problem is not one of culture, but probably one of scale and an absence of responsibility at Google.

The more general question is interesting though, because it could go several ways. For example:

You could argue that people should anticipate that the platforms they rely on are not under their control (and should maybe act on that).

Or one could argue that the platforms should anticipate the diversity of cultural standards they are catering to by easing their moral rigidity (for example through a more diverse/decentralized company structure, etc.).

Here in Europe, some approach a somewhat similar question with some form of data nationalism, for better or worse. It plays into the same realization that there is an unresolved cultural difference between global platforms and local standards, and it intends to politically support local initiatives, corporations, etc. That, I think, doesn't solve the problem, but shifts the level of granularity.

Great problem, many angles.


But that's not the case at all. The dictionary that defines rape is still up and still ranking in Google search results. There's just no advertising allowed on that page. And you could argue that Google is being overly puritanical, but you could also argue that most of their advertisers don't want to be associated with such words, even in a neutral context.


I think this is two arguments. The first one goes along the lines of: defunding isn't problematic since advertising is not literally the only way to earn money.

For me that's equivalent to the idea that deplatforming isn't problematic, because people can still publish elsewhere or, worst case, still talk to other people.

Key to both ideas is rejecting the social significance of operational scale as well as the power that comes with gradual influence.

In practice there is quite a lot of power hidden in that leeway, and the bigger a company gets, the more problematic its influence becomes for society as a whole.

The second argument, I think, is that there is nothing problematic about content demonetization because we can always trivially construct a plausible advertising interest against any unfashionable content. Hence it's seen not primarily as a chilling effect but as something innocent that just, by accident, ends up continuously narrowing the conversation towards the presentable and trivial.

I think this argument isn't great. Just because there are innocent intentions at play doesn't show that there are only innocent intentions at play, nor that the overall venture does not, in the end, have bad consequences for society.

If our ad ecosystem allowed advertisers to nudge a TV station towards what news it shows, it would be a bad ecosystem for society, even if it's understandable that someone does not want their brand shown next to real talk.


I'll go along with you that defunding is a form of ipso facto deplatforming and therefore bad. I think it's trumped in this case by the advertisers' right to free association (and Google's desire to attract advertisers). But if (and I think this is your real objection) that defunding/deplatforming were aimed at a protected class or political identity, then that concern would rule.

That's not the case here, however. And I don't think we should be so concerned about a slippery slope that we can't allow any discrimination on the part of advertisers or Google.

By the way, I think I sense an undercurrent of "but that's just stupid" in regards to the objection to the extremely neutral use of dictionary definitions. You haven't made that argument explicitly, but for what it's worth, I'd agree with you on that personally. But that's not my call to make, or yours. (And if I'm imagining that undercurrent, then my apologies.)


Going by OP's story, his whole website was blacklisted from using Google Ads.


I get that you have pages for those words since you are running a dictionary, but do I understand correctly that you were running ads for pages with those words specifically?


Why should it matter? It's a dictionary; words are in it. What they will find objectionable is utterly random and should not be a factor... What people find objectionable enough to hardcode exceptions for can be rather idiosyncratic; see https://twitter.com/techdrgn/status/1359221506165805060?s=21 for example.

More importantly, the issue is that there's no recourse in these cases. It's downright stupid that you can report a dictionary for this and get them permanently banned. If the issue is "don't run ads on naughty-word pages", then Google should make that list public and stop ruining businesses by practicing "I'll know it when I see it" style moderation by algorithmic bots without human oversight.


If you are trying to sell a dictionary I see no reason why a company should allow you to advertise on the word "rape" to do so.


That's like saying Oxford Dictionary shouldn't be able to make money off their dictionary because it contains the word rape.

It's a damn definition, not an article or essay on rape. Facts shouldn't be censored because someone feels it might offend their delicate sensibilities.


I don't mean that.

I mean it's fine to have those pages. Google also ranks them for those keywords.

But I can understand why a company won't allow someone to specifically run ads on those keywords, like the comment above about eBay describes.

I'm not sure whether that happened in this case or not, but that was my question.


15 or so years ago, eBay appeared to be buying adverts for all noun searches on Google. Certainly when I searched for “plutonium” and “antimatter” and a few other ridiculous keywords, I saw ads telling me I could “buy it cheap on eBay”. I tried this experiment in response to news stories criticising eBay for the same with the nouns “women” and “slaves”.


Did you think they were actually selling plutonium?

This is the same problem in a different skin: words do not equal intent. When we only judge by words, we restrict good-faith information and promote bad-faith euphemisms that do real harm.


I would guess they didn't put different ads on different words, so those words had the exact same ads as every other word.





