Wikimedia director on commercial use of Wikipedia (twitter.com/krmaher)
102 points by okket on March 15, 2018 | 98 comments


The ethics of this move are quite questionable on YouTube's part.

Dumping a large cognitive load on shared volunteer infrastructure without some level of compensation? Not cool.

The fact that YouTube is doing this and is not:

* paying Wikipedia for the service they are demanding of it

* offering Wikipedia support in developing tooling to cope with the NEW TASKING that YouTube is expecting

* talking to them about it before announcing it

Suggests that YouTube does not understand its own role in the media ecosystem and is exposing itself to systemic legal and moral risks that could have adverse effects on the company itself and possibly on the entire industry.

If someone gets to a Flat Earther video via autoplay (starting from watching aircraft videos, say), are they better served by being directed to https://en.wikipedia.org/wiki/Myth_of_the_flat_Earth or by being shown a countervailing video explaining that the Earth is in fact round, what the evidence is, and how to check it?

Does Youtube have a responsibility to help society stay within shouting distance of the reality principle? I think it does.

Do you?


YouTube can't pay Wikipedia for the service because this isn't a service Wikipedia offers in the first place. And honestly if this problem could be solved by throwing money at it, Google would already have done so. They essentially want someone to point the finger at the next time someone blames them for inaccurate content, and knew that there was no way Wikipedia would agree to it with an official partnership.


The rude part is that YouTube is forcing Wikipedia deeper into the fake news wars, apparently without even talking to them about it.


Thank you. I was beginning to think that reading comprehension was a lost art here at HN.


I rather think you're being extremely dramatic, and with zero justification for why this would suddenly add a "large cognitive load" to Wikipedia (relative to the load they are already sustaining today), or why it's a "new tasking" that will require new tooling to cope with (rather than reusing the tooling and processes they have been developing for more than a decade to deal with contentious topics), or why it exposes YouTube and the entire industry to "systemic legal and moral risks".


> and with zero justifications for why this would suddenly add a "large cognitive load" to Wikipedia

Because if a video about, say, 9/11 conspiracy theories has an infobox added to it that quotes the first paragraph of a Wikipedia article, makers and viewers of such a video will go to Wikipedia and edit that first paragraph to front-load the information they want to see. This will happen extensively across hundreds of thousands of topics. Many of Wikipedia's policies and processes won't apply to these new edit wars, because those policies govern whether or not information can be included at all, not which information appears in the introductory section.


Many controversial Wikipedia articles are already protected in some ways. This is nothing new.


How does forking over cash help with the burden on said volunteer infrastructure? Is there any relationship at all between Wikimedia's monetary expenditures and editor effort expended?


Hiring developers to help develop software to make editing easier. Wikimedia has stated this as one of their uses for donations in the past, AFAIK.


That kinda sounds like a failing on Wikipedia's end.


YouTube is indirectly compensating Wikimedia, though; the benefactors page lists Google Matching Gifts as a major benefactor (https://wikimediafoundation.org/wiki/Benefactors). Looks like that's a 'matching' foundation of sorts that will match/double certain donations, maybe from Googlers?


I agree with the first part of what you said.

As for the second part. Can you convince me of why I should trust Youtube's owners to police what I can see and hear, rather than rely on my own whims and judgements?

Let's say we do allow our largest corporations to police thought. In that case, would it be OK for me to question or dissent?


> Can you convince me of why I should trust Youtube's owners to police what I can see and hear, rather than rely on my own whims and judgements?

Asking this question is precisely the systemic risk to YouTube. If there's thought policing going on, there will be a host of players with "the best of intentions" who want in on it. From Animal Control to the Pentagon, every branch of .gov will want a piece of that pie, as will most of the quasi-governmental and non-governmental stakeholders.

If only truth is allowed, whose truth?

If fiction is marked as such, who decides what is not fantasy?

If some powerful faction makes money from people believing that chemtrails are government mind-control drugs being spewed into the air to make us all more compliant and effeminate, should they be allowed to spread that belief at will? What happens if the profitable meme is that a certain ethnic group is diluting the purity of a majority ethnic group's heritage and should therefore be exterminated? Do we allow that?

What are the limits of acceptable folly?


Or what if Harvard scientists said that sugar is good and fat is bad because they were paid a bribe?

So YouTube bans videos spreading "fake news" that disagrees with the "science" by "foremost industry experts."

What if it was about leaded gasoline? Or the $500 million missing from Haiti charitable funds?

Or the fact that the catalyst for the Vietnam War, the Gulf of Tonkin incident, as well as the WMD claims that set off the Iraq War, were all lies?

All these things really happened.


Maybe youtube is unhappy with their current solutions?

Some videos* are already walled up behind warnings and disabled features.

* https://www.youtube.com/watch?v=U1Qt6a-vaNM


Off topic, but it's damn ironic that a video decrying rich, pervasive manipulators is hosted on a platform controlled by the rich, which then tries to guide people away from the video by disabling its standard features for guiding people toward it, which ends up making the viewing experience better.


I watched about three quarters of that conspiracy video for fun, and I don't even get why it's restricted under "hate speech." There are parts that are obviously either untrue or exaggerations, but the whole thesis of the video is "American fascists are behind everything and World War II was invented to be a profitable enterprise for industry captains."


The hypocrisy is that if you tried scraping Google results, they would block you so fast your head would spin.


But is that really comparable? Wikipedia is compiled by volunteers, but Google's search algorithm and results are not. If Wikipedia were really a paid service, it very likely would not have access to so many unpaid volunteers.


Abusing the work of volunteers is a morally worse act than abusing the work of paid services. I don't see the argument that the opposite should be true.


Why is this "abusing volunteers" but large corporations using open source tools is not?

Wikipedia is open to everyone, and in my opinion that means you don't get any right (legal or moral) to object when someone, anyone, uses that information within the rules you set in the license.

If they want to prevent this, they should change their license.


It is true that if Wikipedia wanted to use legal means to prevent it, then they should change their license, but it is equally true that if Google wants to prevent users from scraping their site, then they should stop making the search site publicly available. The web is open for everyone, and thus, following your argument, the rights (legal or moral) should not apply.

Naturally Google won't prevent users from accessing the search, since that would invalidate the purpose of their site, and equally naturally Wikipedia won't change the license, since that too would invalidate the purpose of their site. Both, however, do complain and may invoke blocks when they feel that people are abusing the permission that is normally given.

From a legal standpoint, Google can arbitrarily revoke access to their site, and in a similar way Wikipedia could revoke access (which is not the same as changing the license) to Google. They could block Google's bots, or change the targeted information Google intends to scrape. Wikipedia could publish an explicit statement on how Google (as a legal entity) may access their servers, and Google would be legally bound to follow it or be charged under hacking laws.
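For what it's worth, the standard mechanism for that kind of selective revocation already exists. A purely hypothetical sketch of what Wikipedia could add to its robots.txt (the user-agent name Googlebot is real; whether Wikipedia would ever do this is speculation, not anything they have announced):

```
# Hypothetical robots.txt rules disallowing Google's crawler specifically,
# while leaving all other user agents untouched.
User-agent: Googlebot
Disallow: /

User-agent: *
Allow: /
```

Of course, robots.txt is only a request to well-behaved crawlers, not an enforcement mechanism, which is why the comment above also mentions explicit access statements backed by law.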


> The web is open for everyone, and thus following your argument the rights (legal or moral) should not apply.

I didn't say that, and I disagree with it. The web isn't "default open", and Google has a license that you agree to when using their product that says what you can and cannot do with it. Scraping their site is against that "license" (a.k.a. the "terms of service"), giving them both the legal and the moral right to stop you while keeping it "publicly" accessible.

Wikipedia's terms of service specifically state that anyone is free to read, print, share, and reuse their articles and other media under free and open licenses. They have no more right (moral or legal) to take issue with YouTube including Wikipedia snippets in their application than Linus Torvalds has to take issue with Google using Linux in theirs. As long as you follow the license to the letter, you are legally fine, and in my opinion, as long as you follow the "spirit" of the license, you are morally fine.

I don't see this move by google as violating either of them. The director of Wikimedia might not like it, and that's fine, but Google is in no way under any obligation to stop what they are doing (again, legally or morally in my opinion).

Wikipedia can adjust their terms of use, but they themselves have a framework for how they can update their own terms, and part of that requires a 30-day comment period [0]. If they have an issue with this, they can absolutely update their terms, but I would never donate to the Wikimedia Foundation again and would stop contributing if they are going to start deciding who is allowed access to the data they champion as "free and open".

[0] https://wikimediafoundation.org/wiki/Terms_of_Use/en


The terms of use are not the same as the license for the content. As is stated in the summary of the linked page, permission is granted under conditions such as "No Harm – You do not harm our technology infrastructure" and "You adhere to the below Terms of Use and to the applicable community policies when you visit our sites". Those conditions are not part of the license, which is an important distinction.

So all they need to do, if they wanted a legal tool to prevent Google, is to create a community policy which dictates how scraping may be done for the purposes of YouTube. No 30-day comment period.

Further down on that page, section 10, Management of Websites: "The Wikimedia community and its members may also take action when so allowed by the community or Foundation policies applicable to the specific Project edition."

And finally in section 12. Termination: "We reserve the right to suspend or end the services at any time, with or without cause, and with or without notice."

In summary, they reserve the right to block access in response to abuse.


But copying and reproducing snippets from Wikipedia doesn't do any harm to their technology infrastructure. It would be a negligible number of hits on their system to grab and update the cached snippets.
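For a sense of scale, the kind of lookup needed here is a single call to MediaWiki's public TextExtracts API per topic, cached and refreshed occasionally. This sketch only builds the request URL; the endpoint and parameters are the real, documented ones, but the rest is my illustration, not anything YouTube is known to run:

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def extract_url(title: str) -> str:
    """Build a request URL for the plain-text intro of one article."""
    params = {
        "action": "query",
        "prop": "extracts",   # TextExtracts extension
        "exintro": 1,         # lead section only -- the part an infobox would quote
        "explaintext": 1,     # strip HTML from the extract
        "format": "json",
        "titles": title,
    }
    return API + "?" + urlencode(params)

# One such GET per topic, cached, is negligible load on Wikipedia's servers.
print(extract_url("Myth of the flat Earth"))
```

The actual HTTP fetch (e.g. with urllib.request) is omitted here to keep the sketch self-contained.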

And you are correct that the terms of service and license are different (even if there is a bit of overlap), but it's the license that matters when you are "copying" the content and reproducing it on your own, and the license is either CC BY-SA or GFDL, neither of which has any ability to prevent one specific person from using the information.

>In summary, they reserve the right to block access in response to abuse.

This isn't abuse, it's use. If copying the data from Wikipedia and serving it up with attribution is abuse, then they need to update their terms of service and licenses to explicitly say so (which would pretty much end Wikipedia). If Google were harming their infrastructure by causing excessive load while scraping, Wikipedia would have every right to block them. But Google is not (as far as we know), and therefore Wikipedia does not have any moral right to block access, and Google has no moral or legal obligation to stop scraping and reproducing the content on their own servers. And if Wikipedia does block Google from the service, Google has every right (legally and morally, in my opinion) to find an alternate way of getting the data that doesn't access Wikipedia's servers directly.


The thing is Wikipedia is out there for anyone to link to, without paying a dime, and there are no legal requirements for compensation.

Now, you and others may think they have some ethical responsibility (I'm not sure I agree with that) but if YouTube had to parse all the ethical requirements their users thought they had to abide by, they'd never get anything done.

B2B interactions are never bound by the same ethical constraints that human-to-human interactions are. Corporations are not people - they can be bound by laws but not arbitrary social conventions.

Near as I can tell, Wikipedia is SOL unless YouTube wants to help them out. YT is certainly not obligated to.


> Corporations are not people - they can be bound by laws but not arbitrary social conventions.

Corporations are bound by social conventions for the same reason natural persons are: if they sufficiently violate them, people impose consequences on the corporation by actively avoiding interactions beneficial to the corporation.


This is not about linking. Anyone can link to Wikipedia articles without paying a dime. There is a very significant difference between linking to an article, like HN does, and copying articles and publishing them, which HN does not.


Because if it's legal, it's morally acceptable, right? I guess the patent trolls are in the right then.


You make an interesting point, though I think you're placing a little more responsibility on YouTube than they currently warrant. Perhaps in 20 years the argument you're making would be taken seriously, if YouTube continues on its current trajectory to become as much a tool for information dissemination as it currently is for entertainment.


That's not 20 years down the road, that's now.

https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-po...

This is an opinion piece, but the author is a respected professor who has done some really interesting research on this stuff, so I'm inclined to take it seriously.


Am I the only one who thinks that nothing should be done about the "fake news" issue?

This debate is based on all sorts of fallacies. First, that fake news elected someone. It did not. Second, that this is something new or inherent to the Internet. It is not. Fake rumours and gossip have existed ever since we have existed. People's judgement has been the sole thing holding the whole system together for hundreds of years. Bankers, for example, spread rumours that Nikola Tesla had sex with pigeons. It was fake but it spread like fire in NYC. Did we ban free speech because of such things? No.

From most the discussion threads here it seems that censorship is the only way forward, which scares me because people are falling for the trap and they are justifying censorship.

Who will decide what videos are "controversial"? If we agree that fake news should be regulated, what is the limit of state regulation? Why not then allow them to censor most other politically incorrect forms of speech?

It seems to me like this is headed down a very dangerous path, and it's scary how people fall for it. Fake news is absolutely irrelevant, and the way it's been blown up and exaggerated in order to justify censorship is a conspiracy in its own right. I'd be more concerned about this particular conspiracy: someone is trying to justify censorship because one particular candidate won an election.


Fake news is the media trying to cover up their embarrassment about not doing background checks before copy-pasting something and publishing it. Now someone has to take the blame.

I'm sure someone is very happily converting this into a chance for control over the censoring.


This bears no resemblance to reality... You can disagree with the “fake news” narrative, but you should have the decency to attack a good faith interpretation of it, not whatever straw man you’re alluding to.

Here's the gist, just in case you are actually confused: there are some sites both large and small, from shady operations in the Balkans to Infowars, that publish completely made-up stories. One example might be the child abuse enterprise supposedly run by Hillary Clinton from the basement of a pizzeria.

As we all know, that pizzeria didn’t even have a basement. So we can agree, hopefully, that this was an insane conspiracy theory.

Such stories were/are extensively shared on social media in the run up to the last election. And while they may not have been decisive, it’s hard to argue that they had no effect whatsoever. At the very least, they fed the cynicism and distrust already rotting in the core of society.

That's the idea of "fake news". You can dispute that it has any effect. You can argue that some good comes out of this free-for-all. But note that traditional media publishers simply play no role in this. To make this about some perceived failure of the New York Times takes quite a lot of logical gymnastics.


This is exactly what I mean. If it was on some obscure webpage, no one would know about it, but mainstream media copy-paste news like this just because they know people will click on it, not bothering to check whether it is true. For the news site that is less interesting, as long as they get clicks. Of course this does not mean there are no responsible journalists who do fact-checking; they just become fewer with time.


The problem is not that there exist people who are creating fake content and hosting it on youtube. If that were all, it would be (mostly) fine.

The problem is that gamification and engagement mechanisms pull people deeper into that rabbit hole. Go see what the recommendation algorithm does with "fake news" style content sometime. You could hardly build a better system to "radicalize" people towards extremist ideas if you tried.

As one guy on twitter put it: "my dad and his iPad went from vaguely interested in the knights templar's operations in portugal to believing in ancient aliens seeding global cultures in the space of like three weeks"

I probably agree with you that there shouldn't be straight up censorship. But neither should youtube blindly and algorithmically identify that someone is susceptible to conspiracy-style content and then drive engagement by serving them a never-ending stream of the stuff.


You are not the only one. The thing seemingly appeared out of thin air sometime between the Brexit vote and Trump running for the White House. That the news organisations are talking about it more and more is tragic. At least Trump twists the term to make it more relevant to them.

Fact: There was a small case of "troll farms" producing totally click bait content. But the effect was minimal. This is what was originally meant by "fake news".

Now "fake news" can mean literally anything, and the resulting moves to counteract it will be bad. What also concerns me is that most of the voices against whatever fake news is reside on the left-hand side of the political spectrum.

Okay, now let's assume that fake news is responsible for manipulating and influencing stupid people into voting wrongly. The way to counteract that is education and a free press, not counter-propaganda and censorship.


You’re stating “the effects were minimal” as a fact, but that’s impossible to prove. Some of these stories featured prominently on reddit, for example. And nobody knows what others’ Facebook feeds look like. It’s not inconceivable that some of these stories may have moved the needle in some regions or communities.

Just look at how much political capital Obama had to spend on countering the "birther" nonsense. That was 8 years ago, and it crowded out all sorts of actual policy discussions, arguably gave rise to the Tea Party and everything that followed from it, and probably had effects on every election starting from the 2010 midterms.


The fundamental problem is trusting pieces of information people read on non-authoritative sources like random social network posts being liked or retweeted. People are granting such trust, which is bad, but not the worst of it. More dangerously, we have authoritative sources such as reputed news venues granting more and more trust, or at least giving credit, to such garbage sources.

I don't know if the government should get involved, but if it does, it should only do an educative and preventive job, to warn people that:

* Other than reputed news sources using their Twitter accounts to spread information, it is to be assumed that information from social networks is garbage

* When a reputed news source is referencing information from Twitter or other social networks, they're doing a bad job


I'm not sure which "reputed media sources" are granting any trust to unsubstantiated stories they find on social media. Fox News does, and while they have a reputation, I doubt it's the sort of reputation you mean.

As for Twitter: it’s just a tool, like email. If the president uses Twitter, and actual policy news break there, it’s perfectly fine to report on it.


Do you use search engines? All of them use their own sense of judgment to decide which content is best for your query. You rely on curators in most areas of your life, why would news be exempt?


No, you are not the only one. I cringe every time I read pieces about this. Calling out "fake news" is just the newest variant of placing blame on everyone/thing but ourselves.


I sure hope YouTube gives Wikipedia admins access to a constantly-updated list of which pages they're linking to from conspiracy videos, so that Wikipedia can lock down the editing in much the same way they do for developing news, or recently-deceased celebrities.


Good idea. I'm also worried about the increased chance of trolling / false editing that results from Youtube links. But I doubt Wikipedia is ready to or wants to be this involved in collaborating with any for-profit.


If the browser sends the Referer header, Wikipedia will be able to figure out for themselves which pages YouTube points to. Then it won't be so much intentional collaboration as damage control to prevent vandalism. I doubt Wikipedia is going to like doing that, but they don't have many other options.
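A minimal sketch of that kind of server-side detection, assuming nothing more than access to each request's Referer header (the function name and the flagging idea in the comment are mine, not anything Wikimedia runs):

```python
from urllib.parse import urlparse

def is_youtube_referral(referer: str) -> bool:
    """True if a request's Referer header points at a YouTube page."""
    host = urlparse(referer).hostname or ""
    return host == "youtube.com" or host.endswith(".youtube.com")

# Articles receiving many YouTube referrals could then be surfaced to
# patrolling editors for watchlisting or pre-emptive protection.
```

Note that this only works for the fraction of browsers that actually send a Referer header across sites; it undercounts, but enough signal would likely get through to rank articles by YouTube traffic.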


Who's choosing which videos are conspiracy theories? For that matter who's choosing which Wikipedia facts are shown with the videos they label as conspiracy? Are there no two contradictory pages on Wikipedia?

Are their demonetization efforts not working well enough to shut down those who disagree with their viewpoints?


I remember that Google offered to support Wikipedia (financially and otherwise) several times over and was denied. It’s possible that the offer came with strings attached, but I can’t imagine Google being that clumsy. I have little doubt that if the foundation needs more money, YouTube will be more than happy to help out. But, so far, Wikipedia has refused because they want to remain as impartial as possible and can’t rely on too large swaths of funding from too few sources.

YouTube has tried to implement moderation and failed to address conspiracy-mongers. No matter what they do, someone will decry it as "a platform controlled by the rich" [0]. Wikimedia has been addressing similar issues for almost two decades and has proven more effective than YouTube. Or anyone, really: there is a reason they are scraped for truth. Snopes itself shouldn't be trusted as a direct source any more than Wikipedia. The difference is that Wikipedia allows a debate.

Katherine Maher wants us to "read @Wikipedia with a critical eye" [1], and who is going to do that better than Flat-Earthers? That change is going to be disruptive but will help Wikipedia develop its own critical thinking.

Is it a good move for Wikipedia? No, it exposes Wikipedia to criticism, which is painful, but it is what the foundation explicitly says it needs.

Should YouTube have made a deal, given Wikipedia money to help with that problem? Probably not, it would hurt more than it helps.

Is YouTube exposing Wikipedia's brand more and giving them an opportunity to communicate their mission? They are. I'm more likely to give to them if they tell me that they need money to face DDoS attacks from the Flat Earth society than if they keep offering a very easy-to-overlook service.

I’m not saying Google is innocent: they should attribute their facts to the proper author list in their snippets, for instance, but I seriously doubt that they would hesitate to respond to any request for support.

[0]: https://news.ycombinator.com/item?id=16590852

[1]: https://twitter.com/krmaher/status/973795473113968641


This seems like an example of Wikimedia cancer.[0] They should be crowing about the novel ways in which people are spreading their freely licensed knowledge without a pile of cash, obstacles, and bureaucracy thanks to the wonders of copyleft.

Instead they seem to be complaining that someone (albeit a huge multinational corporation, though I don't see why that should matter) dares to use their content without ponying up cash as a "donation" to the cause.

[0] https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2...


The tweets are all about promoting editing, and the dangers of turning Wikimedia into a read-only source of Truth, rather than something that should be seen as a shared resource to be improved and verified. She barely mentions donations, and only in a list of all the ways people can contribute.


A couple of years ago I had a colleague who was obsessed with getting his name on Wikipedia. So he had someone add a page for his town. Then he edited the page to mention that he was the star captain of a local league. To play a practical joke on him, we edited the page to remove his name. Soon it devolved into a cat-and-mouse game of editing and re-editing the page.

I don't know if Wikipedia now strictly monitors page edits. But if they don't, I hope YouTube also adds a big red warning on the page: do not believe everything on the internet, even if it is Wikipedia.


Depends on the article, but I've seen some moderators take it way too seriously, to the point of rolling back an edit simply because you didn't ask the person you're somehow supposed to know "owns" the page. That kind of defeats the purpose of a wiki and has turned me off contributing to it.

I think what YouTube should do is just put a disclaimer on videos saying "This video is related to an unproven conspiracy" or similar. I doubt linking to a Wikipedia page is going to change anyone's mind.


That is grossly against Wikipedia policy and that editor could get sanctioned, including being blocked. See https://en.wikipedia.org/wiki/Wikipedia:Ownership_of_content


I doubt someone literally said "you have to ask me first". One thing I've noticed in every HN thread about Wikipedia is that no one has the faintest clue what admins do, [0] or what's going on when they're trying to edit a page. It's more likely that they changed something that was against consensus, which likely had past discussion. They were also probably told something to that effect in the edit summary. And for some reason, whenever people share their grievances, they never provide context like the page it occurred on or their specific edit. Just because it's the encyclopedia that anyone can edit doesn't mean it's obligated to take each and every edit regardless of content, quality, or conformance to guidelines and consensus. Wikipedia is also extensively documented, [1] so there's even less reason for a forum that prides itself on being programmer/engineer types to stay so ignorant.

[0]: Wikipedia doesn't have "moderators". Anyone can revert edits and just because they opposed your edit, it doesn't automatically mean they're an admin, of which there are only around a thousand.

[1]: https://en.wikipedia.org/wiki/Help:Contents


Granted it was a while ago and things may have changed, but trying to do a simple typo correction led to what I described. I really don't have time to dig through their documentation just to do something as minor as fixing spelling. It was obvious the editor took offense to it. So that's that from me, sorry for helping, I guess.

I've also run into problems where pages on companies are wrong and edits are prevented even though the information is outright inaccurate. Apparently having the info on the company's website isn't citable? Seems stupid to me.


The fact that it was a "simple typo" makes me sure there was more to it than that, because no one would revert it if it were so. It's likely that it was something you saw as a typo, such as the spelling of colour vs. color. [0]

> Apparently having the info on the company website isn't cite-able? Seems stupid to me.

Depends what you're trying to cite. Most small pieces of information can be cited just fine to primary sources, though secondary sources are preferred. [1] However, more controversial statements or claims of notability require a reliable independent source, [2] or else any company could claim "we're the best" on their own site and include it on Wikipedia.

Anyway, in both these cases you fall into the trap I listed before, where you're soooooooo certain that you're in the right and the stupid encyclopedia and its editors are power hungry. Have you ever considered that maybe there's more reasoning behind it? Again, I almost guarantee there were reasons given in both cases, to which you didn't give the least attention. You wouldn't make a pull request to an open source project full of code that goes against their style guide, introduces a regression, and is suboptimal, and then get righteous that it wasn't accepted without question, would you?

[0]: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style#Nati...

[1]: https://en.wikipedia.org/wiki/Wikipedia:No_original_research...

[2]: https://en.wikipedia.org/wiki/Wikipedia:Identifying_and_usin...


Wikipedia is already notoriously unreliable and has a huge moderation problem. Passing the entire "fake news" problem down to them is the last thing we need.


This is completely wrong. Here's a collection of research on Wikipedia's reliability. In fact, it's been found to be as accurate as Encyclopedia Britannica in multiple studies. Its pharmacology entries, for example, are more accurate than textbooks. Other examples are included as well. https://en.wikipedia.org/wiki/Reliability_of_Wikipedia#Compa...


Quite ironic to link to Wikipedia as your source for Wikipedia's reliability. I do think that Wikipedia is quite reliable, so I don't have a problem with that, but I think the article doesn't quite support your point.

While multiple studies comparing several encyclopedias are mentioned, they tend to have small sample sizes (only a few articles), and being on par with other encyclopedias doesn't necessarily make Wikipedia reliable; it could just as well be that the other options are similarly unreliable (it seems like all the studies found something to criticize).

Additionally, I interpret "Wikipedia's information about pharmacology was 99.7% accurate when compared to a pharmacology textbook" to mean that the pharmacology textbook was taken as the gold standard and Wikipedia got pretty close. But 3 errors per 1000 (whatever their unit of measurement was) might not be reliable enough to be used by pharmacologists.


Every statement in that particular section has sources linked, which means you can see for yourself whether it's true or not. Are you saying studies published in Nature or New England Journal of Medicine can't be trusted?

I do think that Wikipedia is quite reliable, so I don't have a problem with that, but I think the article doesn't quite support your point.

Compared to alternatives, Wikipedia is perhaps the best at combining high reliability, "freshness" and breadth of content. What's your benchmark for measuring reliability? I can't think of many others.


> Are you saying studies published in Nature or New England Journal of Medicine can't be trusted?

http://journals.plos.org/plosmedicine/article?id=10.1371/jou...

Summary

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.


Yes Wikipedia and the link sources are good but it is still ironic. You know Wikipedia could cherry pick their links. They don't, but they could.


Fair point.


I'm saying that Wikipedia has a much more nuanced discussion of its reliability (including drawbacks of the studies involved) than the summary in your comment.


> Wikipedia is already notoriously unreliable

It's still more reliable than other encyclopedias and far more reliable than a good part of traditional news websites.


Absolutely, but it's the easiest thing for YouTube to do.

One needs to keep in mind - they are NOT interested in building a video version of Encyclopedia Britannica; they're only interested in as many clicks on their site as possible (to display as many ads as possible). As far as the YT business model is concerned, all their videos could be fake news, so long as enough people are sitting and watching.


can someone explain/translate her diplomatic stance to a 5 year old in one sentence?


If you see an excerpt of a Wikipedia article, please go to that page on Wikipedia, examine it critically, and correct mistakes if you see them.


is there a between-the-lines message towards Google?


"When you highlight Wikipedia content, please encourage people to contribute and to read it with a critical eye. Also, please contribute financially or with infrastructure if it helps you make money."

But it's best to read the tweets yourself, since it's based on interpretation. There's not that many of them.


Quite an irony to tell someone to read a tweetstorm lecturing them about the importance of in-depth critical analysis.


Is there a non-twitterspasm version of this essay?


When did conspiracy theories become harmful? This seems like a move straight out of “1984”.

If someone wants to believe (insert grandiose and stupid theory), who cares? YouTube started as an entertainment platform, not a learning tool.


Well there was that one guy who murdered his father last year because he thought his parents were part of the Pizzagate conspiracy: http://www.businessinsider.com/former-milo-yiannopoulos-inte...


I have a controversial and maybe stupid question that is probably unanswerable one way or the other.

If you're going to murder your parents over a conspiracy theory is there any reason to assume you wouldn't murder them for a dozen other reasons? How justified is censoring a platform for millions of users because of one mentally ill user?

Is there any evidence that being exposed to certain ideas "pushes someone over the edge" any more than what they can come up with in their own head or without technological assistance? I've read some stuff about school shootings that suggests media coverage encourages "copycat" attacks.

I'm not just trying to be a contrarian or apologist for a murderer but the causation here seems rhetorically appealing but realistically sketchy to me. People have been killing each other over politics since as long as recorded history.


> If you're going to murder your parents over a conspiracy theory is there any reason to assume you wouldn't murder them for a dozen other reasons?

I'd wager that some will and some won't, and that if you were morbid enough you could probably measure such a thing statistically.

> How justified is censoring a platform for millions of users because of one mentally ill user?

It's not only the mentally ill who occasionally have absolutely terrible ideas, buy into something stupid, or occasionally get conned. Adding information to videos is also not censorship - at least insofar as I'd define the term, or have generally seen it defined, unless perhaps you're going so far as to try to bury a needle in a haystack - so I'm not sure the question is relevant here unless you're worried about some slippery slope or something.

I'd hesitate to impose censorship, but adding information to pierce the walls of echo chambers - and to let e.g. "the marketplace of ideas" actually work - seems like a great idea to me. Not that any implementation of such a thing won't have its flaws, mind you.

> Is there any evidence that being exposed to certain ideas "pushes someone over the edge" any more than what they can come up with in their own head or without technological assistance? I've read some stuff about school shootings that suggests media coverage encourages "copycat" attacks.

I feel like you've answered your own question here, but I may be missing something. Isn't what a copycat being exposed to when they read the news about one of these incidents, also ideas? Terrible, terrible ideas? Or are you perhaps asking if any of that stuff you've read rises to the level of evidence instead of clickbait hearsay? I've likely read little more than you have - references to studies that suggest the evidence is there. I don't recall reading about any studies refuting that idea or showing no correlation.


I agree with you that what youtube is doing now is not censorship by any standard definition and I don't think it's a bad thing. You might be able to argue that it's a form of "positive censorship" in the sense that a wiki on a video would be a scarlet letter but even that seems like very weak censorship.

I included the example of copycat attacks to show good faith that I'm not ignoring examples I know of that counter my own argument. What I'm trying to understand is the role of radicalization and conspiracy in violent acts. We know there are large communities of radical conspiracy theorists and yet physical violence or violent harassment is only a small subset of that community. How safely can we presume that "radical media" is the main cause of the latter violent groups? It's definitely an appealing narrative to imagine these people groom and reinforce their own beliefs until they "boil over" into violence but is that really how it works? We act as though people can be almost brainwashed into doing terrible things just by mere exposure. I'm arguing that at the very least it's more complicated than that and we should inject a little bit of nuance into the rhetoric of what conspiracy theories mean for society.


In the one anecdote in this thread, the perpetrator in question was Milo's intern. Someone receiving direct 1st-hand gaslighting from Milo himself doesn't really need Youtube to learn about Milo's conspiracy theories.


There is certainly a lot of research about the radicalization process, generally aimed (unsurprisingly) at terrorism rather than isolated murderers. I don’t have a single succinct citation to point you to, but I am relatively confident that there is good evidence to support the claim that an individual being increasingly surrounded by media espousing a radical or fringe belief is correlated with that individual becoming increasingly radical.

That doesn’t answer your question about justifying censorship, but I think that’s not the best question to ask, since media recommendation algorithms that penalize certain radical subjects is hardly what I would call “censorship.”


>I am relatively confident that there is good evidence to support the claim that an individual being increasingly surrounded by media espousing a radical or fringe belief is correlated with that individual becoming increasingly radical.

I get where you're coming from but I have some issues with that perspective. Would you agree that a conspiracy theorist who uses physical violence and/or threats of physical violence is a strong indicator of a dangerously radicalized person? Then how can we explain the large number of people exposed to radical media but only a small handful performing radical violence? Are conspiracies and radical beliefs always some kind of treadmill where you take a nibble and then inevitably go deeper and deeper until you become so delusional that you feel the need to take violent action? To me that feels too much like a just-so narrative.

Also, I think we should be concerned about what some people consider "radicalization." Everyone is against radical violence and media that contributes to that end. But what about media that makes someone a Trump voter? Rhetorically some argue that they are just as dangerous as any shooter because they create a social environment that precipitates violence, even though the causation isn't direct.

It's difficult to talk about these things without coming up with false dichotomies or bad generalizations. Really what I'm trying to get at is a "throwing the baby out with the bathwater" understanding. It's easy to get rid of the baby when it's not your baby or the baby is casually stereotyped in an appealing way.


> Then how can we explain the large number of people exposed to radical media but only a small handful performing radical violence?

Surely there's some variance in the predisposition of individuals to become radicalized, but I don't suspect that means that people who do become radicalized would inevitably have become radicalized regardless of their exposure to certain ideas. The fact that terrorist groups put a lot of effort into radicalizing people indicates that they are under the impression that their efforts are meaningful. Even if the predisposition for an individual to become radicalized is already determined and unchangeable, there is an incentive for various radical groups to radicalize those people under their banner rather than some other banner.

> Everyone is against radical violence and media that contributes to that end. But what about media that makes someone a Trump voter? Rhetorically some argue that they are just as dangerous as any shooter because they create a social environment that precipitates violence, even though the causation isn't direct.

I agree that there's a continuum here, and that it's extremely subjective. I think most people agree that explicitly inciting violence is not (and should not be) protected speech/communication. But what about communication about groups dedicated to violence that don't explicitly mention violence? Then what about groups espousing philosophies which are strongly correlated with advocating violence, like blatant racism? Then what about groups that espouse philosophies described to just barely not quite be blatant racism, like "white nationalists" who explicitly advocate for "peaceful ethnic cleansing" of certain people from certain regions, but claim to not be racist and not support violence? Do we throw up our hands and say "well, they said the word 'peaceful,' therefore they're not technically advocating violence, therefore it's protected speech"?


> If you're going to murder your parents over a conspiracy theory is there any reason to assume you wouldn't murder them for a dozen other reasons?

It's quite possibly the case that the people most susceptible to conspiracy theories are often also more susceptible than average to other stimuli. But then, we don't fail to limit emission of (and public exposure to) various industrial respiratory irritants just because many of the people most susceptible to them are also more susceptible than average to other environmental hazards like common allergens.

And those are often limits imposed with the force of law, not merely private actors choosing to limit (or even merely warn about) what is put out by their operations.


People kill and die far more often for far more reasons than conspiracy theories. Censor driving videos because it might make someone crash their car? Censor Islamic videos because it might make someone beat their wife? Censor the news because it might make people fight in whatever popular war is happening? False information seems like a particularly harmless thing.


I couldn't find any mention of Pizzagate in that article. However, it claimed his problem was that his father supported "leftist pedophiles," which sounds like it would have been related to Pizzagate, but could have been related to something like Epstein, or human trafficking and the hundreds of millions missing in the strange, mysterious and horrible world of charitable work in Haiti. Either way, this offender is a particular idiot for what he did to his father, among other things.

Per the article, the fact he acted out so horribly didn't surprise me when I read that he was involved with Gamergate--a horrendous display of aggressive, toxic behavior in the gaming community. I am a gamer, and I know that neo-Nazi, racist, sexist hateful ideologies are alive in gaming communities from my time spent playing games like TF2 and CS. They're kind of like 4chan--unmoderated, except the people are angrier.


> When did conspiracy theories become harmful?

A long time ago; see, as an example—but not the first harmful conspiracy theory—the theory of a worldwide Jewish conspiracy that was a central element of early-20th Century anti-Semitism.


Sounds a lot like the current conspiracy theory of whites suppressing blacks which is leading to race violence in South Africa that looks like it's going downhill fast. Should they also ban all "slavery and racism is why blacks are poor" videos too? I can't believe young people are so keen on censorship and so afraid of ideas. That's cold war type thinking. I grew up between the cold war and extreme leftism where freedom of ideas was high, so that's my standard.


The sad thing is that some conspiracy theories contain a kernel of truth; that's why nuanced sources are so important.


anti-Semitic conspiracies go back many centuries further than that. https://en.wikipedia.org/wiki/Blood_libel 16th century isn't even the oldest.


I wasn't even trying for the oldest, just something that was outside of the present generation and directly tied to harms that would be familiar to most people as soon as you mentioned the conspiracy itself.


Pizzagate comes to mind. That went beyond being a harmless conspiracy.



To add to the other examples posted, vaccine conspiracy theories lead to children dying from preventable diseases.


When "stupid" conspiracy theories lead to real-world violence committed by their believers, they're harmful. And there's pretty clear evidence that YouTube's recommendation algorithm, in optimizing for time spent watching YouTube videos, has as a side effect optimized for sending people to the most extreme stuff on the site. YouTube has an ethical responsibility to, y'know, do something about that.


[flagged]


Do you think YouTube is a government? Why or why not?


No, I don't and I'm not sure what I said to make you imagine such a strawman.

I'm interested in whether you think YouTube should block Islam entirely, or perhaps to separate out the terrorist conspiracies within Islam from the rest and where to draw the line.

Would, e.g., this 12th Imam stuff, even from ostensibly peaceful folks who did not advocate violence, draw a ban in your view if it was also something terrorists used?

Please explain.


Why should I? Your approach obviously is going to be to make a laundry list of things you don't like and that you'll claim I somehow do like, and aggressively accuse me of all sorts of things if I don't either give up or assert perfect agreement with you.

What's in it for me to engage with you?


Please do not stuff words in my mouth, I've said no such things and we're supposed to assume good faith here, which only becomes more difficult when you're explicitly assuming bad faith on my part here.

I'm asking exactly how much you're willing to ask them to ban to "do something" and how to define the contours of that.

Proposals to "do something" about what people are allowed to say should be checked the same way we test code--by looking at all the hard edge cases. Your retort here is akin to a dev blaming QA for breaking their code by making the testcase too hard.

That said, if your idea of "do something" isn't to block the speech, please let me know what it is that you have in mind, I will give you credit that it's not entirely clear what you're suggesting here.


My original comment was perfectly clear in identifying the problem with YouTube's recommendation algorithm, as I see it: in optimizing for time spent on site, it also as a side effect optimizes for recommending the most extreme content on YouTube.

The fact that you felt the need to jump in the way you did suggests that you A) didn't actually read the comment, and B) were responding with a knee-jerk sound bite based on a ton of assumptions about me and my political inclinations.

So like I said, what do I get out of engaging with you? So far, the answer is nothing worthwhile.


This whole exchange might have been avoided by simply clarifying that your idea of doing something was to change how the recommendation algorithm works.



