
Well, actually, attribution errors weren't the only thing I had in mind. Peer review ensures that colleagues will tell you about related work that you don't know about. Sometimes people will tell you that something is related even though you yourself don't think it is. Only with some time and acceptance will you see that the remarks really are related, perhaps not directly to your own contribution, but to the bigger field you orient yourself in.

Come to think of it, this is probably the most important argument against having an NLP-based "recommender." Personally, I think something like this might be interesting, probably even a great help, but at the end of the day people need to really read a lot of papers, follow the proceedings of their target conferences and journals, and ask colleagues for their bibliographies. This has the added benefit of teaching them how to present their own work in contrast to others', do meaningful evaluations (in the best of all worlds, of course!), and figure out who is doing interesting work and might be worth getting into contact with. Of course, some parts could be automated, but there is currently no incentive for scientists to do so.

IMHO, it would be a much more important step for CS researchers to publish their code, too, because I frequently come across papers that have no implementation or evaluation at all--and that's really bad, because then the least-publishable unit becomes an idea with nice pictures. Researchers can be very successful with this "publication strategy." Come to think of it, there should be a different way to rank scientists than the number of publications or their impact; unfortunately, I have no idea what could work instead.



Peer review ensures that colleagues will tell you about related work that you don't know about.

Not really, because other researchers advance their careers based on how often and how much they (a) publish and (b) get cited. So the colleagues most likely to be in a position to review your work are those who get cited a lot, and they primarily know about work that gets cited a lot, starting with their own.

A relevance/NLP/PageRank-style citation recommender could be added as a step in the reviewing process. Rather than having only human reviewers suggest further reading, a "machine reviewer" would do so as well, placing the query results in front of everyone involved in publishing a paper.
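To make the idea concrete, here is a minimal sketch of such a "machine reviewer": it ranks previously published abstracts by TF-IDF cosine similarity to a submission and surfaces the top matches as suggested further reading. The paper titles and abstracts are hypothetical placeholders; a real system would index a much larger corpus and could layer citation-graph signals (the PageRank-like part) on top of the text similarity.

    # Minimal sketch of a "machine reviewer" that suggests related work
    # by text similarity. The corpus below is a hypothetical placeholder.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    published = {
        "Smith 2009": "We present a graph-based ranking of citations for literature search ...",
        "Jones 2011": "An evaluation of topic models for recommending related work ...",
    }
    submission = "We propose a PageRank-style recommender for suggesting related work ..."

    # Build TF-IDF vectors for the published abstracts plus the new submission.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(list(published.values()) + [submission])

    # Compare the submission (last row) against every published abstract.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

    # Print candidate "further reading", most similar first.
    for title, score in sorted(zip(published, scores), key=lambda x: -x[1]):
        print(f"{score:.2f}  {title}")

The output of a query like this could simply be attached to the submission alongside the human reviews, so authors, reviewers, and editors all see the same candidate citations.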


I totally agree about the importance of scientists publishing their code. That is critical. It's one of the many parts of the scientific process where the community would benefit from greater sharing.



