
> I personally agree with that, as I do not believe that the notability requirement is leading to increased article trustworthiness.

> Interesting idea, and I'd like to hear more of this idea! However, from this I suspect you are mixing up the reliability of an article and the specific phrase "notability" (about how deserving a topic is to have an article). When deciding if an article is notable it will usually have little or no history!

The reason I am "mixing this up" is that the only argument I have ever heard for why Wikipedia needs a notability policy at all is the one that was cited by britta earlier in this thread: that without such a clause there would be tons of articles that are hardly ever looked at by anyone, hardly ever edited by anyone, and containing information that is difficult to verify. I believe that there are numerous better solutions to this than attempting to use the "notability" filter, as I maintain that "notability" does not actually lead to "veracity".

To be very clear about this, I will repeat the context from britta that started my involvement in this discussion:

>> The notability guideline (http://en.wikipedia.org/wiki/Wikipedia:Notability) is closely related to the verifiability policy (http://en.wikipedia.org/wiki/Wikipedia:Verifiability) — from Wikipedia's perspective, if a subject has not been covered by multiple reasonably reliable secondary sources (in other words, if it isn't "notable"), we can't write a reasonably verifiable article about that subject. Every article has to include secondary sources as references, so that editors and readers can quickly fact-check.

Going back to your response:

> It's difficult because we simply do not know where the NYT got that info: if Diggnation showed them some real figures (say, a screenshot) then the NYT article is better than a tweet!

I must apologize here: I was intending to use a different example, but you took me to mean the Diggnation example. This is my fault; I should have been clearer.

What was coming to mind with relation to the "Twitter post" example, is that there is a ton of journalism on topics I directly care about that is based on the Twitter feeds of people I work with: a lot of "person X said Y", which is then translated into some article on what is or is not possible with a tool that person X builds. The fact that a reporter read that statement and repeated it doesn't make it more true, and in fact the opposite is quite common: they are paying sufficiently little attention and have sufficiently little background knowledge that they repeat it wrong.

Please understand: I am not talking about a situation of "interpretation" or "research", but more like establishing dates on when things happened... I understand that this feels pretty blurry (but it seems equally silly to go into a detailed example of something where my example is itself biased; I am happier sticking with the examples such as Diggnation and RSA); it is simply a situation where "that dude at Wired that finds this stuff gets him readers" is not somehow more trustworthy than the place he got the information from... either the information shouldn't be published at all (I believe this is probably a quite reasonable course of action), or "that dude at Wired" should be skipped and the original source should be used.

That said, part of your comment doesn't rely on the misunderstanding I caused: it is true that we don't know where the NYT got that information, but that uncertainty doesn't make it any more true. While there is some possibility that the information was gathered in a way we should indirectly trust, we can't see how, so we can't verify it, and it is honestly not in any way better than if some random person on Wikipedia had simply asserted it to us... reporters for major publications (such as the New York Times and Washington Post, both of which I have first-hand experience with due to articles written about my work) really do trust that you are a reliable source on things that you control, and really do attempt to fact-check by calling you back for verification.




Ok, fair enough. britta touches on some of the issues, but not on all of them. Notability is about a) requiring that there be at least one reliable third-party source (so that the article has a chance of containing verifiable information) and b) ensuring that there is some limit on the scope of Wikipedia. The latter is the key facet.

Whilst notability is closely related to verifiability, it is not quite in the way britta cast it: rather, the material used to establish that a subject is notable (i.e. a significant claim to importance) must itself be verifiable. In other words, the relationship works the opposite way around.

> I was intending to be using a different example, but you took me to mean the Diggnation example: this is my fault, I should have been more clear

Ah, my apologies, I'm reading quickly as it is a busy day.

Please don't get me wrong; the issue you highlight is a major problem, one I have raised a few times internally with the community. But there has been no easy resolution.

It's worth noting that the reliable sources policy explicitly says that reliability hinges not only on the publisher but also on the content and the author. If an author is seen to lack the qualifications, or has a bad reputation, these factors are taken into consideration.

With that said, a lot of Wikipedians don't know this; it's a problem I run into constantly when discussing sources ("Well, it was published by the NYT, so it doesn't matter what the author's reputation is"). It's not the policy at fault there, but our own community's lack of interest in its rules...

"that dude at Wired that finds this stuff gets him readers" is not somehow more trustworthy than the place he got the information from...

The intent of the policies (and bear in mind what I say above about how much that holds up...) is that the secondary source is used to filter what in the primary material is considered important to the wider community. To take an example: when Microsoft released Windows 8 there was quite an extensive list of new features. Simply recording that list isn't what Wikipedia aims to do; instead you would use secondary sources to highlight the new features that were considered by "experts" to be important, groundbreaking or otherwise worth a comment (of course, the full feature list would be linked to as well).

I'm not arguing this policy is perfect, nor that it doesn't break down in the scenario you cite, but it does have a solid basis.

One other policy is that Wikipedia does not have firm rules (for this very reason), so you could say that making a convincing argument, such as you have, should be enough to keep the material out. In principle this works; in practice it doesn't, but only because of the community dynamics (a whole other problem!).

> while it means there is some possibility that the information was gathered in a way that we should indirectly trust, as we can't see it we can't verify it, and it is honestly not in any way better than if some random person on Wikipedia just asserted it to us...

To an extent it is better, because the reporter you cite has his or her real name attached to the article and a public reputation to uphold.


> ...ensuring that there is some limit on the scope of Wikipedia. It is this latter one that is the key facet.

Right... but in a world where I can store the entirety of Wikipedia on my mobile phone, you have to ask, transitively, why there needs to be a "limit on the scope of Wikipedia" at all. I see no a priori reason why Wikipedia needs, or even should tolerate, such limits, so one must examine the arguments used to defend that policy.

So far, the only reasonable arguments I have heard (as in, discounting technology problems that never existed: you can easily scale Wikipedia to have a bunch of mostly-ignored articles) come down to "verifiability" through the argument path I elaborated (and which britta seeded), and that is precisely the path used by people defending "deletionism" on behalf of Wikipedia editors.


There are a number of good arguments.

Where does the scope of Wikipedia end? Should there be an article about "saurik"?

How do you actively police articles for e.g. defamation (note, we already struggle to handle this problem and it is getting worse)?

How do you stop spam?

I like to come at this argument from the opposite direction: what need is there for Wikipedia to give an article to every single trivial thing? Is what the president had for breakfast in 2011 sufficiently interesting to the reader?

Wikipedia is not a dump of knowledge; it is supposed to be a curated summary of the sum of human knowledge. And just as, within an article, you make editorial decisions about the level of detail to go into, so the entire wiki is scoped to a reasonable level of detail.




