
The article explicitly makes the distinction between law and culture in free speech. Implicitly, that includes your concern about spam and porn, which are legally approved and culturally disapproved.

For example, the article points out that Apple could decide at any point to remove the Twitter app from their store for any reason. Such a reason could be that there is lots of porn on Twitter, and this could encourage Twitter to discourage porn on their platform, kind of like what happened with Tumblr.

As a consumer, I don't mind it if platforms censor spam and porn as long as there is a switch somewhere I can toggle that will let me opt out of it if I want to see spam and porn. That's just my personal preference and if enough other people express that preference then the culture around free speech will change.



> I don't mind it if platforms censor spam and porn as long as there is a switch somewhere I can toggle that will let me opt out of it if I want to see spam and porn.

In this sentence, you've gone farther than the article (and many similar posts) in defining how you'd trade off between free speech and other values. Extrapolating a bit, it sounds like:

(1) You believe "platforms" have responsibilities in free speech culture to make all submitted content available.

(2) It is ok for platforms to control the default visibility of content based on what they perceive as prevailing values, as long as those controls can be overridden by users.

(3) You trust social and market competition among platforms to ensure there are platforms aligned with a wide enough range of values that everyone's speech finds a home.

I'd love to see a deeper dive on some of these points by cultural-not-legal advocates. Some questions I'd like to see vigorous discussion of:

- When does something become a platform, and start having responsibility to rebroadcast all submissions?

- How much friction is ok for a platform to introduce before it blurs the line with censorship? (Extra submission hurdles? Demonetization? Deamplification? Opt-ins vs opt-outs?)

- Are there categories of values that are ok to introduce friction around (e.g. porn) vs others that are not (e.g. politics)? How can we separate them reliably?

- What are the qualities of competition between platforms that need to be maintained to make sure the allowable friction reflects a range of cultural values?

As a speech-not-reach guy, my conclusion is that platforms are participants and inevitably express their own values through curation, so it's most important to keep competition alive at the platform level. However, I think there could be a better steel-man case for platforms having coherent responsibilities than I've seen. A lot of the discussions start strong and then devolve into breathless quotes about freedom.


With (2) and (3), I didn't mean to leap from is to ought right away. I think it's to be expected that companies that publish user-generated content will choose to control the visibility of content based on what they believe are the prevailing values of the various stakeholders: users, advertisers, regulating agencies and so on.

As a small stakeholder, I naturally would prefer a world in which the other stakeholders share my values, because that would make the companies more willing to do what I want. Right now that means I would like the culture of free speech to change in my favor.

If I lived in a place where regular people liked free speech but the government liked censorship, then I would want the laws of free speech to change, and I'd consider the culture fine as it is.


Sorry, I forgot to directly answer the questions.

- I think a platform doesn't have a specific responsibility to rebroadcast everything. If I don't like the things they choose to rebroadcast, I'll find them less useful and start using a different service.

- For content that I don't want to see, they should introduce any hurdle they want. I am only annoyed with censorship when they get in between the sender and the receiver without asking the receiver first. For example, censoring spam and porn is fine when done at the request of the user who would receive the spam and porn. Censoring misinformation is less fine because it has to be done without permission of the receiver. The receiver may be gullible and stupid, and then it looks like censoring misinformation is good. But sometimes the receiver is smart and better informed than the censors, and it's not easy to tell in advance.

- Same as the previous: the categories for which it is OK to introduce friction are those categories that the user who would receive the messages asks you to censor. For example, when an ad is irrelevant there is often a button you can press to tell the platform that you don't want to see those kinds of ads, and then they start showing different ads. I would like something similar for spam, porn and misinformation.

- I don't know about competition; network effects seem very strong. Instead of having a special network only for special people who like free speech, I would prefer to change the wider culture so that the mainstream social networks support free speech. It's either that or wait for some crazy billionaire who happens to value free speech to buy the mainstream platform? Seems unreliable.


Belated thanks for answering here!



