An obsession with eminence warps research (nature.com)
72 points by max_ on July 6, 2017 | hide | past | favorite | 38 comments


I've been both a "toiling in relative obscurity" and a "rockstar" scientist. I really didn't mind being the former, and I don't think my earlier research was necessarily bad, but I was at an undergraduate-focused university where teaching dominated my time, and I did not have the resources to conduct world-class research. I am now a soft-funded researcher at an R1 and love my new position.

What the author seems to be advocating is some sort of normalization of the peer review process... it's not entirely clear what she is calling for. I've always advocated for double-blind reviews, but it's very difficult for these to work because in small fields such as my own, it's pretty trivial to figure out who wrote the proposal. So in one sense, the rich get richer, but so long as the science that comes out of it is good, so be it! You aren't going to get R1 research done when you have a 3/3 teaching load with no graduate support; that's just life.

I have been on both sides of the review process and I have yet to feel like I was snubbed in my earlier "toiling" days, nor have I, when reviewing, felt compelled to award someone just because they were a "star" - if anything, I might even be more critical when reviewing proposals and papers from "stars". For me, though, I just basically follow the guidelines for evaluating proposals and let the chips fall where they may.


> You aren't going to get R1 research done when you have a 3/3 teaching load with no graduate support; that's just life.

Amen.

Should also add that the R1 school I did my graduate work at put funded research in front of didactics every time (thanks to state funding cuts, primarily). The students (and their paying families) really had no idea that these priorities even existed. Everybody gunning for tenure (and with tenure) put teaching in the backseat because, like you said, you have to if you want to succeed in research these days. Where I was at, graduate students ended up carrying most of the teaching load...to the detriment of just about everybody else.


In my experience there is a lot of variance between research fields and even very specific adjacent subfields in how abrasive and star-focussed the process is.


I have to agree with this. The stories I've heard just don't match my own experience from either the reviewer or reviewee perspective. Chemistry (especially biochem) seems especially nasty for reasons I don't understand (maybe the money gradient is really steep?), while the engineering and math fields were pretty reasonable.

In math and physics, arXiv has made a big difference, and the prime focus is really on making publishing much less influenced by the money-men (Elsevier etc.). Fame still has an effect, but when the respect of your peers is what matters most (few outside the field even realize what you're working on), that gets tempered by disdain for dumbing things down. Writing for a mass audience and/or high school students etc. is really hard.


Not a fan of the use of "Our" and "We" in this article. There is a long and storied history of researchers claiming the publication/review process is an arbitrary obstacle to sharing information:

>"That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked `publish' and `reject'. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back `How do you know I haven't already done it?'"

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/


Ok, we put "an" above instead. The most neutral of articles.

That's a great quote.


Thanks, TFA is still full of those terms but there is nothing to be done about that.


I don't like double-blind review, because it makes it hard to build on your own research. Often, when working towards a result you can publish at a "big" conference or a "prestigious" journal, you will publish these steps before your "big" publication. Now how do you refer to these steps in a double-blind review? You have to refer to them, and doing so means that either a) the reviewer knows exactly who you are, or b) the reviewer thinks that you add too little to that previous research, which, as far as they can tell, isn't yours. The only way out of this is not to publish any of your previous research until you are ready for the "big one", and that isn't good for anyone. So forget about double-blind; it just doesn't work. Neither does blind review work, in my opinion: if you are making a decision that might affect my life in such a big way, I have the right to know who you are (maybe not in the real world, but in any kind of process designed to be fair).

The only solution is to make the review process completely open and transparent.


Remember, kids, tenure decisions are sometimes made by counting the number of publications the victim has in Nature, Science, and Cell. Suggesting actually reading them is seen as a shocking, unnecessary idea.


It's discriminatory.

If you read them, you might later want to separate the articles by quality.

(Now, seriously, since so many research institutions are government or quasi-government run, separating quality from bias (honest or dishonest) is very hard.)


They have the impact factors, there is no need to check the actual contents.

/s.


Create some kind of publication that accepts only anonymous papers with an encrypted name. When the paper is published the author can reveal the key to decrypt the name.

Thus when the publication publishes a paper, the judgment, however biased, is based on the paper alone, and not the author.
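
As a minimal sketch of how that could work (assuming the `cryptography` package; the workflow and the example name below are hypothetical, not anything an existing journal does):

```python
# Sketch: the author submits only an encrypted name blob with the paper;
# after publication they release the key so anyone can decrypt and verify it.
from cryptography.fernet import Fernet

# Author side, before submission.
key = Fernet.generate_key()                    # kept secret by the author
sealed_name = Fernet(key).encrypt(b"A. Researcher, Example University")

# The journal reviews and publishes the paper alongside `sealed_name` only.

# After publication, the author reveals `key`; anyone can now check the name.
revealed = Fernet(key).decrypt(sealed_name)
print(revealed.decode())                       # A. Researcher, Example University
```

A plain hash commitment (publish SHA-256(name + salt), reveal the salt later) would serve the same purpose without the extra dependency.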


This is called double-blind peer review. Unfortunately, it is often against the interests of the most "eminent" players in a field, who benefit from their eminence leading to gentler peer reviews.


Usually it's very easy to guess the author of a paper in your field.


That is theoretically possible but a simple lexical analysis would give you very strong hints about the author(s).

Not to mention the references, etc.


In many fields it is trivial to tell who wrote a paper if you are at all familiar with the most current work.


All true, but very hard to fix. At some point, you want funding to go to people with a proven track record, and you want to know who such a person is.

That said, double-blind at review time for papers into conferences/journals makes a lot of sense, and does (in my experience) absolutely help level the playing field a bit...


What are the qualities required for someone to have a 'proven track record'? Is it stamina? Work ethic? Perseverance?


In other words, science is susceptible to cliquish behavior. All society is susceptible to this, except that in science the counter-incentives are much weaker. There's no price to pay if you recommend acceptance of a lousy paper just because you know the author.


Exactly the point of the author (who I know). Not just cliquish, though, but faddish and full of misattribution of credit.

I agree that all of society is susceptible to this. I think the problem is that in the sciences, we pretend it doesn't happen.

With music, for example, society has a clear understanding of quality and popularity, and the distinction between the two. There might be arguments about the two, such as how strongly related they are, or whether or not quality is even a meaningful attribute of music, but I think most people understand that they are not the same, in one way or another.

With science, though, we act as if they are the same. There's an assumption that, sure, there's some noise, but in the end it all evens out and attribution is correctly given and good ideas rise to the top. No corruption, taking or giving credit inappropriately, no cognitive errors or biases, no nepotism, no etc.

The set of assumptions underlying science reminds me a lot of homo economicus in economics. A set of assumptions that are almost certainly untrue, with significant consequences, but which we tend to ignore out of convenience. (I actually think there should be more skeptically rigorous decision-theoretic/economic analysis of different scientific systems and structures, like game-theoretic analyses of different scenarios as they play out in science.)


This may be naive, but why not use something like PageRank as an objective measure of a paper's (and its author's) quality? Of course this can be computed only after the paper is already published...


Fun fact: the eigenvector centrality computation in PageRank was originally used for citation networks many years before PageRank.

A surprising number of web spam problems have parallels in academic publishing: spam rings, "rich get richer" problems caused by paper search engines, and outright demands for self-citations from editors to boost a journal's impact factor.
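
For the curious, here is a minimal sketch of that idea on a toy citation graph: plain power iteration over "paper A cites paper B" edges. The papers, the damping factor, and the iteration count are illustrative, not tied to any real dataset.

```python
# Minimal PageRank-style centrality on a toy citation graph (power iteration).
def pagerank(citations, damping=0.85, iterations=100):
    """citations: dict mapping each paper to the list of papers it cites."""
    papers = list(citations)
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(papers) for p in papers}
        for citing, cited_list in citations.items():
            if not cited_list:                 # dangling paper: spread its mass evenly
                for p in papers:
                    new_rank[p] += damping * rank[citing] / len(papers)
            else:
                for cited in cited_list:
                    new_rank[cited] += damping * rank[citing] / len(cited_list)
        rank = new_rank
    return rank

citations = {
    "smith2010": ["jones2005"],
    "lee2012":   ["jones2005", "smith2010"],
    "wu2014":    ["jones2005", "lee2012"],
    "jones2005": [],
}
for paper, score in sorted(pagerank(citations).items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")   # jones2005 ranks highest: it is cited by everyone
```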


For the past decade or so, I've often wondered why, given the informational tools available, research is condensed into 'a paper'. I hate reading 'papers' because there's so much left out. I want to know about the initial hypothesis, the drivers, the paths taken, the false starts and changes in direction, and of course the failures. There's so much value lost. The 'paper' seems the scientific equivalent of the photoshop edit.


I agree, which is why I'm building https://pubrank.carbocation.com to do just that.

One of the more challenging things about PubMed and its 27 million indexed papers is that there is virtually no sense of a unique author identifier, which has made that aspect somewhat of a research project for me. (It's an active area of research on which a few CS papers have been published. I'm trying to hack through it with some older statistical methods for a "good enough" solution.)
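
Purely as an illustration of that kind of "good enough" statistical approach (this is not the method pubrank actually uses; the record format, helper names, and threshold are all hypothetical), one simple baseline is to block records on surname plus first initial and then merge records whose coauthor sets overlap:

```python
# Toy author-disambiguation sketch: block citation records by (surname, first
# initial), then merge records within a block when their coauthor sets overlap.
from collections import defaultdict

def name_key(author):
    surname, _, given = author.partition(",")
    return (surname.strip().lower(), given.strip()[:1].lower())

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_authors(records, threshold=0.2):
    """records: list of {'author': 'Surname, Given', 'coauthors': [...]}.
    Returns lists of record indices believed to refer to the same person."""
    blocks = defaultdict(list)
    for i, rec in enumerate(records):
        blocks[name_key(rec["author"])].append(i)
    clusters = []
    for indices in blocks.values():
        block_clusters = []
        for i in indices:
            for c in block_clusters:
                if any(jaccard(records[i]["coauthors"], records[j]["coauthors"]) >= threshold
                       for j in c):
                    c.append(i)
                    break
            else:                              # no existing cluster matched
                block_clusters.append([i])
        clusters.extend(block_clusters)
    return clusters
```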


You mean count citations? Is being cited a lot the same thing as being high quality? What about a paper that is continually refuted by other papers? Is that a high quality paper?


I believe that system existed before pagerank, and always assumed that Larry and Sergey were inspired by it.

EDIT: CiteSeer is what I was thinking of (1997).


Perhaps the word should be "notable" rather than "high-quality." A paper that is repeatedly refuted must have had some intellectual impact if others have bothered to refute it in so many ways. PageRank would also take into account the quality or notableness of the refuting papers themselves, so this cannot be reduced to a mere count of the number of refutations. That means a highly-ranked, oft-refuted paper would have been refuted not just by anyone, but by other high-quality or at least highly notable authors.


Like impact factor? This is done already and it is very flawed.


It's been done:

http://www.eigenfactor.org/

I don't think tenure committees pay attention to it because it's not as widely accepted/established. Individuals still on the tenure-track career path are welcome to correct me if I'm wrong though.


Nvm. Eigenfactor is at the level of journals, not individuals - my brain doesn't work today.

Someone did try to calculate some centrality measures for researchers in neuroimaging though:

http://cos.dery.xyz/

This would also probably not be taken seriously by a TT committee.


I think this dovetails somewhere with the scientific publishing discussions that have been popping up on HN.

I’m not an academic, so I don’t have very strong opinions, but from just reading about it at a surface level it feels like there are solvable problems in that system besides copyright and profit margins.

Publication builds "eminence" (publish-or-perish), determines eminence, and (as this article suggests) is determined by eminence. Meanwhile, the system of publication is biased against null results and repetitive/confirmatory experiments. This keeps valuable data out of the scientific body of knowledge in ways that harm the mission.

There are other issues that have more to do with the book-like format of articles. Could review be separated from publishing (so that publication can be multiple)? Could publications (that do not omit null results) be structured in a way that embeds meta-study by default?

Basically, what does science want/need from publication? Is it just the current model but free (as in beer and love), or something bigger?


They want to communicate their research to other researchers. Different scientists do not agree on the best format for this.

They need to have high impact publications to hold onto their jobs.


And probably deters a great many people from pursuing the basic science altogether.

"I know all these people who are way smarter than me and who work way harder than me. And they're not famous yet. I'll never stand a chance!"


Lots of very smart, very hard-working people in most other fields are not famous at all. Why should science be any different?


I think part of what she's saying is that currently, fame is implicitly or explicitly the goal and standard by which scientists are evaluated, but that it shouldn't be.

A lot of comments on this article so far are focused on blindness of the review system, control over publications, etc. which is definitely relevant, but is only half of the story.

The other half is the reasons why papers are popular, and how credit is attributed in papers, grants, and other research.

First there is the question of why papers are published, then why they are cited, and then how peers mentally attribute credit in those cited works. At each step of the way, there are problems: there are biases in why papers are published, why they are cited, and how credit is attributed. A similar process happens with grants.


> In fact, a couple of biomedical researchers have proposed that grant reviewers should strive to identify only the top fifth of grant proposals, with the final awards decided by lottery

This works if there are enough funding opportunities that you can have repeated trials. If you only have the same funding opportunity once a year... it may also be a disadvantage.

But overall, I support her movement.
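
For what it's worth, the mechanics of the proposal quoted above are simple enough to sketch. The proposal scores, number of awards, and seed below are made up for illustration.

```python
# Sketch of the "identify the top fifth, then award by lottery" proposal.
import random

def lottery_awards(scores, awards, top_fraction=0.2, seed=None):
    """scores: dict of proposal -> reviewer score. Returns the funded proposals."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    shortlist = ranked[:max(1, int(len(ranked) * top_fraction))]   # top fifth by default
    rng = random.Random(seed)
    return rng.sample(shortlist, min(awards, len(shortlist)))      # lottery among the shortlist

scores = {f"proposal_{i}": round(random.uniform(0, 10), 1) for i in range(50)}
print(lottery_awards(scores, awards=3, seed=42))
```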


From Stan Kelly-Bootle's brilliant "Computer Contradictionary":

symposium, noun (from Greek, "to drink together") A gathering of scholars where each attendant is intoxicated by his/her contribution and sobered by the lack of response.


> What's the solution? ...admit up front that judgements of eminence are often subjective. ...read people's work and evaluate each study or proposal on its merits.

> One trick I use to avoid status bias is to keep myself blind to the authors' identities as long as I can — a strategy that many journals in social and personality psychology have also adopted. Once I tried this, I realized just how much I had been using authors' identities as a short cut. Assessing research without this information — knowing that I might be harshly criticizing a famous person's work — is nerve-wracking. But I'm convinced it's the best way to evaluate science.

I'm not a full-time academic, but having written and reviewed papers for SIGGRAPH and other journals for many years, I've been thinking about this problem. A lot of people I know are jaded about the paper review process and feel like it's infested with politics. SIGGRAPH has a double-blind review process, but it still has this problem; not knowing the identities of authors or reviewers doesn't solve it entirely. I don't know how much double-blinding helps, or how to quantify it, but it's widely believed to improve things.

One of the problems, even in a double-blind system, is that it's still often fairly easy to deduce the organization that the author came from, especially for groups that have attained some prestige. The research specialty, the language, and the figures and examples in the paper sometimes make it blindingly obvious whose Ph.D. student wrote it, and almost everyone on the papers committee is more familiar with the various groups and their work and history than I am. As a paper author, I've been able to correctly guess and verify several of my reviewers over the years. Sometimes, reviewers are made aware of the origins of a paper behind closed doors. It's discouraged and ugly, but it still happens.

The "solution" rattling around in my head that I would like to try out is simple:

- Require reviewers to rank a set of pairs of papers against each other. For each pair, pick the paper you'd rather see accepted.

- Send papers to many more reviewers, and randomly. (SIGGRAPH is 5 reviewers assigned via social network, for example. How about 30 random reviewers instead?).

- Ask for less feedback from reviewers, instead relying on greater numbers of ranked vote pairs. This will both reduce the workload on reviewers and provide more data, reducing noise and making the result more stable.

Once the reviews are in, a ranking algorithm would relax all the ranked choices and produce a list of papers in order of votes, and the top N would be accepted without further debate.

Having done lots of reviews and participated in discussions over reviews, one of the problems I see is that papers are given absolute scores - they're not compared against each other. Some amazing papers end up with a low average score, and some mediocre papers end up with a high average score. Each paper's score ends up in its own unit system, despite efforts to normalize the range.

Not sure if it'd work, but I've been wanting to try it for a while.
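
To make the aggregation step concrete, here is a minimal sketch using a simple win-fraction ranking rather than any particular ranked-choice algorithm; the paper IDs and votes are made up.

```python
# Minimal sketch of aggregating pairwise review votes into an accept list.
# A vote (a, b) means the reviewer preferred paper `a` over paper `b`.
from collections import Counter

def rank_papers(votes, accept_n):
    wins, appearances = Counter(), Counter()
    for preferred, other in votes:
        wins[preferred] += 1
        appearances[preferred] += 1
        appearances[other] += 1
    # Rank by fraction of comparisons won, so papers seen in more pairs
    # aren't automatically favored; accept the top N without further debate.
    ranked = sorted(appearances, key=lambda p: wins[p] / appearances[p], reverse=True)
    return ranked[:accept_n]

votes = [("paper_A", "paper_B"), ("paper_A", "paper_C"),
         ("paper_C", "paper_B"), ("paper_B", "paper_D"),
         ("paper_A", "paper_D"), ("paper_C", "paper_D")]
print(rank_papers(votes, accept_n=2))   # ['paper_A', 'paper_C']
```

More principled aggregators (Bradley-Terry or TrueSkill-style models, for example) would handle sparse or inconsistent comparisons better, but the shape of the pipeline is the same.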



