It seems the problem is that the author is trying to create a representation of knowledge, whereas research papers are more of a log of work done.
His approach may be a perfect fit for an experiment in meta-research. You could run periodic reviews of articles released in specific topical areas and create summaries of the findings. Any time those findings change, it's a git commit, and you can track the change over time. It's something like Wikipedia, but designed for research. I would not be surprised if this already exists.
I see a challenge in figuring out the document structure that would be most conducive to distributed version control (pull requests, etc.) while still providing insight on a historical basis. For example, you could run these summaries retrospectively, say by looking at DNA research in the 1960s, and make a separate commit for every key finding through the years. It seems to me that you would pretty much have to design a specialized language for encoding scientific knowledge for this structure to work.
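To make the idea above concrete, here is a minimal sketch of what "one commit per key finding" could look like with an ordinary git repository and a plain-text summary file. The repository name, file name, and commit messages are illustrative assumptions, not part of any existing project; the findings themselves (Watson and Crick's 1953 double-helix structure, the 1961 triplet nature of the genetic code) are just well-known examples from the DNA case mentioned.

```shell
# A repository of research summaries, one commit per key finding.
mkdir dna-research && cd dna-research
git init -q
# Local identity so commits work in a clean environment.
git config user.name "Example Reviewer"
git config user.email "reviewer@example.com"

# Initial summary of the field's state.
echo "1953: DNA structure is a double helix (Watson & Crick)." > summary.md
git add summary.md
git commit -q -m "Initial summary: double-helix structure"

# A later finding updates the summary; the change is one commit.
echo "1961: The genetic code is read in non-overlapping triplets." >> summary.md
git commit -q -am "Add finding: triplet genetic code"

# The evolution of the field's understanding is now queryable.
git log --oneline
```

The hard part, as noted, is not the version control itself but choosing a document structure granular enough that each pull request maps cleanly onto one finding.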
Exactly. The problem is that papers aren't worth what the rest of the world thinks they are. Nobody working on the cutting edge of a given field gives a damn about the papers published in the field's major journals. Papers are little more than permanent records of what people talked about at some conference several months before, and through other informal channels even earlier. By the time they're published, they're already old news. They may have some archival value, but that's about it.
Unfortunately, the rest of the world thinks of papers as the primary means by which scientists exchange ideas. This is a myth. Sure, there are some scientists (mostly in developing countries, or those coming from a different field) who rely on papers to figure out what their colleagues are up to, but if so, that's only a reason to improve real-time communication, not a reason to turn papers into real-time communication tools.
This and the previous comment are /exactly/ right, especially: "... that's only a reason to improve real-time communication, not a reason to turn papers into real-time communication tools".
This sounds like an update to the format of textbooks. They already work somewhat like this: as new editions come out, they incorporate the latest revisions in the field. On the other hand, they also serve their publishers by putting out new editions with little or no improvement, just to keep earning money.