Hacker News | modmans2nd's comments

Prior to this algorithm, most Epic customers had an alert for providers to check for sepsis in a patient. In most cases the Epic tool is an improvement over that over-alerting.


That provides important context. However, based on the news, Epic has not been honest in their marketing of this feature. If you provide something that is less harmful but still harmful, you have to disclose that the real solution would be something else: fixing the insurance system, or the liability rules?




Being underground makes it harder for those who wish to spread it to do so.


Maybe I shouldn't have mixed up two different aspects of this question. "Taking it underground" relates more to the expression of ideas, as discussed in the article. I don't see much of this in the use of the classical Nazi symbols.

On the other hand, the law has been applied to a ridiculous extent in banning Nazi symbols from all sorts of cultural artifacts, such as games or movies. That doesn't make any sense to me.


Deplatforming it helps prevent it from normalizing in the community.


That assumes that 1) the platforms are controllable and 2) those who control them vaguely align with your beliefs / morals.


No, it just makes it edgier for the kids. It takes longer to crystallise but crystallises harder.


[flagged]


Yes.


Perhaps a less phonetically terrible name?


Let’s not forget, companies still have to remove child porn due to other statutes, so a repeal of 230 would still result in theoretical liability.


They wouldn’t have to stop moderating. Some assholes would sue them and the courts would have to decide how the 1A handles such things.


Section 230 doesn’t give social media any more power than they have under the first amendment. It is simply a shield from nuisance lawsuits.


It does though. For example, it gives them the power to refuse to remove false information without being held responsible for it, which the first amendment does not give to publishers like the NYT.


So there's a nuance there you're missing.

Newspaper publishing is opt-in; that is, anything published, they chose to publish.

Websites that allow third parties to post content are opt-out; that is, anything published appeared without prior moderation.

If a website operator posts their own statements, those can theoretically be found libelous. They can't be held accountable for posts by other people. Newspapers potentially can be (though I've never seen a court case where a newspaper was sued for something in the Opinion section), but they -chose- to publish that item.

Realistically, websites should be thought of as public bulletin boards. Should you be able to sue the person who put up the bulletin board for content that was posted to it by other people?


I understand the nuance you describe, but the situation I described is, at times, a problem with Section 230. For example:

> When a US Army reservist found herself at the center of a conspiracy about the coronavirus earlier this year, her life was upended.

> Hoax peddlers on the internet falsely claimed that Maatje Benassi was somehow the world's COVID-19 patient zero. Over time, conspiracy theorists posted at least 70 videos across multiple YouTube channels claiming that Benassi had brought the virus into the world. Along with those videos came death threats, which Benassi and her husband, Matt, took seriously.

> But at first, the couple did not know how to respond. Trolls hiding behind aliases on the internet were almost impossible to find, and the Benassis could not sue YouTube for allowing the content to be posted because of a now-controversial law known as Section 230.

https://www.cbsnews.com/news/section-230-60-minutes-2021-01-...


But absent Section 230, YouTube would presumably be in a position where it couldn't take down any content that wasn't actually illegal in some way.


My understanding is that you would sue the person who originally posted the content. You can sue "John Doe" and subpoena the social media companies and internet service providers for information to identify the poster.

https://revisionlegal.com/internet-law/defamation-attorney/i...


Seems reasonable, it just costs a lot of money. The cost of harassment should outweigh the cost of protection here, though.

a) Post a video to YouTube: $0

b) Contact a lawyer to subpoena YT and then sue a jerk: $50,000


Which removing Section 230 (or not) doesn't change. No matter what the law says, no matter what culpability exists, if you can't afford a lawyer, you're not getting anything. That's an issue with the law in the US, but hardly relevant to the issue at hand.


But YouTube has deep pockets, so if you could sue YouTube, lawyers would work on contingency. What lawyer would take a John Doe case on contingency?

Worse, what if the defamer is able to hide their identity, or is in a jurisdiction that doesn't care about an order from US courts? In that case, even paying for a lawyer won't help.


If you have an infinite flow of third-party posts, where is the distinction between choosing to publish and choosing not to publish?

If you add black to a white background, you get black. If you remove white from a white background, you get black.

If the intention behind the action is the same, and the outcome is the same, should the legality of it hinge on the action?


That’s not true. The liability shield only covers content produced by other entities, e.g. tweets. Twitter is still liable for content it produces itself, such as fact checks and trend summaries.

Likewise, the New York Times is liable for the articles published by its own writers, but it bears no liability for the comments section.


But the NYT can carry liability for letters to the editor published in its dead tree format -- see https://www.rcfp.org/supreme-court-will-not-hear-letter-edit... as an example of a local newspaper being held liable for letter-to-the-editor-published defamation.

The CDA draws a bright line between content "authored" by a firm and content "made available." In practice, that line is fuzzy.

As a hypothetical example, Twitter probably should face liability if it took a random tweet (say) accusing Bezos of pedophilia and made an editorial decision to promote that tweet to all its users, but it could still plausibly claim that it was just making the content available.

It's a complicated topic, and I don't know where the best balance lies.


The tweet promotion is an interesting point, but the letter to the editor is easier IMO. It's assumed that a human has read and selected the letter to the editor, which is why they'd have liability. For the promoted tweet, my first reaction would be to say, if a human affirmatively promoted it, they'd be liable. If it's pure algorithm, they wouldn't be if they took it down when served a notice.


That’s not the current situation under Section 230. You can even re-tweet or forward content posted by someone else and not be liable; only the original author is liable. This is sensible because otherwise all sorts of innocuous relaying, trending, and categorisation activity, normal to forums and social media, that affects the scope and visibility of posts could trigger liability.


> The liability shield only covers content produced by other entities

That's what I meant, but you're right, I wasn't entirely clear. Thanks.

That's a protection that neither social media nor the NYT (for comments) would have without Section 230 if they do any moderation (at least according to Stratton Oakmont, Inc. v. Prodigy Services Co.)


> the power to refuse to remove false information without being held responsible for it, which the first amendment does not give to publishers like the NYT

Yes, the First Amendment does protect speech that gives false information. We had a recent HN thread on just this topic:

https://news.ycombinator.com/item?id=25695190


You obviously didn’t bother reading the article


I predict cynicism will predict silly things.


Your opinion is not grounded in fact or reality.

Bizarrely, prison officers' unions are arguing that prisoners should get the coronavirus vaccine ahead of people who haven't committed a crime. See this: https://www.usatoday.com/story/opinion/policing/2020/12/11/h...

This is just the beginning. There will be plenty of such public employee unions jockeying for access over private citizens. And these unions spend huge amounts of money on their preferred candidates in elections to state legislatures, among other governments.

