I think what is missing from the article is an understanding of the incentives of academics. Quite often, putting a piece of research "to bed" is the goal. Why? Because any good academic has a long list of things they want to get to.
Very few people want to get mired in the (inevitable) problems that arise in a paper for eternity. This is why there are review papers that summarize the state of the knowledge at a given point. These articles are useful summaries of what came before and what (at least appear to be) dead ends were found.
Maybe I'm missing some point with the author's article. But getting units of production out and finished (i.e. papers) is a useful process. I'm not sure what would be gained by keeping documents editable forever. Scientific literature is not code.
To me, the main problem with papers in their current shape is that they are required to be more or less self-contained. When one wants to state a result that improves the knowledge in a well-established field a little bit, one has to waste time and space restating the definitions and preliminary results necessary to understand it. This is counterproductive both for the author and for the reader interested only in the small new bit of information.
If papers were collaborative, one could simply propose improvements directly where they fit in the reference paper, without having to write a paper from scratch, and readers would be immediately aware of these follow-up results, without having to search through dozens of papers.
I don't know. I think the act of building a paper up from scratch is an essential element of the process of research. It really isn't a waste of time and space, in the sense that proper research requires these definitions and preliminary results to be carefully reviewed by the author anyway. Why not have them type it out for us? Or even for their own sake?
A wiki-style database of information could be an exciting resource for kicking off new ideas. When scientists conduct research within this database, though, findings and methodology must be held to a high enough standard. Marks of high-quality research include well-defined terms and contextualized prior results.
If you want to get straight to the new information, read the abstract up top, skim the middle and read the results and discussion.
I completely agree that serious researchers should review the definitions and results they are basing their findings upon. But then, I'd find an editable reference paper, corrected and improved by a significant number of people working in the field, much more reliable than an old-style paper published decades ago, after a (botched?) review by a couple of anonymous reviewers, and never revised since.
This would also address, a little bit, the problem of notation: if everybody agrees on and works on the same piece of work, they are likely to adopt the same notation, providing a nice coherence for the reader.
A wiki style database has many advantages when it comes to organizing and searching information. In my opinion, though, the editing process should be closer to GitHub's pull request (as advocated by the article), to ensure that everything is properly reviewed before publication.
Finally, a publication scheme like the one I described would also address a recurrent issue with traditional citation-based papers: citations are one-sided. It is easy to see which papers one article depends upon, but the converse is hard (without specific tools, at least). With collaborative editing and Wikipedia-style links between articles, the reader is immediately aware of the latest findings in the field, which tremendously simplifies bibliographic research.
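As a minimal sketch of what two-sided citations buy you (all class, method, and paper names here are hypothetical, purely for illustration):

    from collections import defaultdict

    class CitationIndex:
        """Toy two-sided citation index; names are hypothetical."""

        def __init__(self):
            self.references = defaultdict(set)  # paper -> papers it cites
            self.cited_by = defaultdict(set)    # paper -> papers citing it

        def add_citation(self, citing, cited):
            # The forward edge is what a bibliography already gives you...
            self.references[citing].add(cited)
            # ...the reverse edge is the hard-to-get "converse".
            self.cited_by[cited].add(citing)

        def follow_ups(self, paper):
            """Which later papers build on this one?"""
            return sorted(self.cited_by[paper])

    index = CitationIndex()
    index.add_citation("smith2021-improvement", "doe1998-reference")
    index.add_citation("lee2023-correction", "doe1998-reference")
    print(index.follow_ups("doe1998-reference"))
    # ['lee2023-correction', 'smith2021-improvement']

Stored this way, the "who built on this?" query is as cheap as the usual "what does this cite?" query.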
There is a tradeoff here. If you want terse, field-specific papers without introduction and/or definition of terms they will be essentially incomprehensible for those not knowledgeable about the field. If you want things to be more generally accessible, a brief (!) explanation of the point and basis of the paper is helpful.
That said, I would like to see papers that actually provide a return for the time spent reading them. Teach me something that I can use if I'm familiar with the discipline. Make sure it's actually generally applicable rather than an artifact of the data. Do proper statistics and/or testing of the idea. Show it's not just an LPU (least-publishable unit) that resulted from running tutorials from the vendor.
There's a tremendous push to publish, as it's the currency of academic and much professional life. I've done methodology all my working life, and the good papers are joys to read. They provide insight into new techniques, things which I can understand and apply and build upon. There's test data, so I can check that the paper, and any work I do based on the paper, is robust.
They are unfortunately quite rare. I think editors use me as a hatchet man for me-too papers, or maybe that's all people write anymore. Yeah, the goal is to get one's students trained and employed and the grants obtained, but please, please write things that are worth the time spent reading them.
Most of the great scientists worked on a single idea their entire life and expanded upon it, improved it, corrected it, and were never really done with it (Einstein comes to mind). Scientific knowledge is never final.
I personally have a huge beef with the way life scientists publish their results in tiny, tiny bits, which makes it extremely hard to cross-check them against other studies, to find out whether they were later disproved, or even to figure out the average of some quantity. Try making a concrete model of something and you will find yourself searching for experimentally measured values in tomes, amid the blather of introductions and discussions, like monks did in the Middle Ages... Databases sometimes exist, but they are not mandatory, so you never know if they are to be trusted. We need a structured way to catalog scientific results, and journals are not it (especially with the politics that lead scientists to publish in marginally related journals).
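To sketch what "a structured way to catalog scientific results" could look like, here is a toy record format; every field name and number below is a made-up placeholder, not taken from any real database:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Measurement:
        quantity: str       # what was measured, e.g. "example rate constant"
        value: float        # the measured number (placeholder values below)
        uncertainty: float  # standard error, in the same units
        units: str
        method: str         # how it was measured
        doi: str            # which paper it came from

    # Two hypothetical reports of the same quantity from different papers:
    records = [
        Measurement("example rate constant", 1.20, 0.10, "1/s",
                    "assay A", "10.0000/paper-one"),
        Measurement("example rate constant", 1.40, 0.15, "1/s",
                    "assay B", "10.0000/paper-two"),
    ]

    # "Figuring out the average of some quantity" becomes a one-liner
    # instead of a dig through introductions and discussions:
    mean = sum(r.value for r in records) / len(records)
    print(f"mean = {mean:.2f} 1/s across {len(records)} studies")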
That's not necessarily true, and I'd be incredibly worried if it were true. I think it's more an artifact of the design of this version of 'science' as an institution.
Freeman Dyson was a brilliant dropout mathematician who happened to meet Richard Feynman and demonstrated a proof of Feynman's work (which helped lead Feynman to a Nobel). He says,
"I think it's almost true without exception if you want to win a Nobel Prize, you should have a long attention span, get hold of some deep and important problem and stay with it for ten years. That wasn't my style."
Here's a list of Dyson's awards: Heineman Prize (1965), Lorentz Medal (1966), Hughes Medal (1968), Harvey Prize (1977), Wolf Prize (1981), Andrew Gemant Award (1988), Matteucci Medal (1989), Oersted Medal (1991), Fermi Award (1993), Templeton Prize (2000), Pomeranchuk Prize (2003), Poincaré Prize (2012).
I think it's in fact very true, and there is a very good reason why that is the case.
Most experts know more and more about less and less, because in order to understand something you need to dig ever deeper into the specifics.
What is needed are expert generalists who are able to understand several fields well enough to see where knowledge from one area can lead to understanding in others, and vice versa.
Expert generalists are precisely the thing that's missing, especially with how younger scientists are trained. It's harder to fund an expert generalist.
Freeman Dyson is a great example because at 89 he published a solution to the prisoner's dilemma, decades after his original theoretical physics work.
Dyson really did propose a great strategy hidden in the PD. It is a huge oversight on the part of game theorists that they let an outsider find it.
However, it's not the ultimate solution. It's an extremely interesting aspect (a strategy, a style of play) of the PD that game theorists had somehow never seen and articulated mathematically.
For the evolutionary fitness of the extortioner solution that Dyson discovers:
You don't really understand a field without digging deep. A great example is how much students are willing to trust surveys before versus after they have run a large one themselves. Another is mice running mazes: the numbers may look nice in theory, but they can hide a lot of problems.
I totally agree but the question is what constitutes a field.
I once read about a whole series of areas where solutions to issues in one field came from understanding another field. Unfortunately, very few people span multiple fields and can call themselves experts in each.
So the question is whether we could spread the expertise out across more in-between studies, and letting go of literature as the only way to collect knowledge might be a great first step.
Honestly, universities or other places where people in diverse fields collaborate seem like the solution to this problem. There are a lot of fields out there and 2^N sucks, but conversations take less effort than a deep dive. Bell Labs comes to mind as a private-sector version of this.
The problem is that this collaboration does not happen, because people are mostly "trapped" within their own field. My point is simply to drop the idea of literature as the container of knowledge, as if knowledge behaved like literature (i.e. once written, it is saved as a piece of knowledge).
Or put another way.
Instead of modeling knowledge after the way our brains best deal with structure, we should leave the structuring to the machines and start approaching knowledge more like an organism that can be explored. I think we are bound to see something along those lines soon.
Maybe it worked for him; I have found that people who tend to do the most important work in my field are dedicated and believe in their work, instead of perpetually looking for something new but unimportant to publish.
What was Einstein's single idea that he worked on? The photoelectric effect, for which he won the Nobel prize, or special and general relativity? Or statistical mechanics like Bose-Einstein statistics? Because it seems to me like he worked on many ideas during his life.
Certainly. But it was not the only idea he worked on, as I commented.
Perhaps I'm reading too much into it, but I interpret "Most of the great scientists worked on a single idea their entire life ..." as working on a single idea, to the exclusion of others.
Otherwise the statement would be "Most of the great scientists worked on an idea their entire life."
Marie Curie: received Nobel Prizes in Physics for her work on radiation and in Chemistry for her work on radium and polonium. If the idea here is "radiation", then Einstein's idea was "physics" and Darwin's was "biology".
Alan Turing: died entirely too young. Best known for early work in computers. Then switched to mathematical biology, specifically morphogenesis.
Niels Bohr: perhaps "nature of the atom" or "quantum mechanics"
Max Planck: Black-body radiation and special relativity
Charles Darwin: evolution (not really a small idea)
Leonardo da Vinci: not applicable
Galileo Galilei: astronomy (I can't think of a smaller "idea" for his body of work)
Nikola Tesla: umm, "electricity"?
Albert Einstein: see above
Isaac Newton: optics, gravitation, physics .. and then the Royal Mint, counterfeiting, and a whole lot of alchemy.
It's hard to conclude that these people worked on a single "idea", though some worked mostly in a single field.
I may not have stated it well. I did not mean an idea to the exclusion of others, but rather that they didn't stop working on any single idea they took up, for a long, long time. From the list you give, there aren't any scientists who put their research on a subject "to bed", unless it proved a failure.
Umm, Newton? The latter part of his life was research in alchemy and biblical chronology.
If you say that Newton "worked on a single idea [his] entire life", then what was the idea?
In any case, the observation is incomplete. "All great scientists slept at least once a month" doesn't mean that sleeping at least once a month is a distinctive attribute of great scientists.
If most great scientists work on a given topic for all of their scientific careers, is that not mostly because most scientists do the same?
That's not true -- not even for Einstein. His work on relativity was arguably his greatest, but remember that he won the Nobel for his work on the photoelectric effect. He also did a lot of work on particle movement, i.e. Brownian motion.
We often have the idea of a "solved problem" in science, but very few such problems in software development (overgeneralizing a bit: in any field of technology). This is probably because science is often about "what" (what is the fastest algorithm for matrix multiplication?) or "whether" (does P = NP?), while technology is often about "how" (how to implement matrix multiplication efficiently?).
Once you solve a problem in science, you solve it for good. But in technology there can always be "better" ways to solve it, and things keep evolving.
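To make the "what" vs. "how" split concrete with the matrix example, here is a minimal sketch; numpy stands in for any tuned implementation:

    import numpy as np

    def matmul_naive(a, b):
        """Textbook O(n^3) definition: settles the 'what', not the best 'how'."""
        n, m, p = len(a), len(b), len(b[0])
        c = [[0.0] * p for _ in range(n)]
        for i in range(n):
            for j in range(p):
                for k in range(m):
                    c[i][j] += a[i][k] * b[k][j]
        return c

    a = [[1.0, 2.0], [3.0, 4.0]]
    b = [[5.0, 6.0], [7.0, 8.0]]
    print(matmul_naive(a, b))          # [[19.0, 22.0], [43.0, 50.0]]
    print(np.array(a) @ np.array(b))   # same "what", a very different "how"

The definition never changes; the blocked, vectorized, cache-aware "how" behind the library call keeps evolving.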
When Darwin "solved" the question of evolution, he did not solve it for good. There has been a lot of work to make it a better, deeper, and more powerful mechanism for understanding biology.
Science doesn't solve problems in that sense. Darwin didn't solve the evolution question; he proposed a successful theory to explain it, and that core idea remains true, though modern biologists have gone much further than Darwin ever could, since his time was before DNA. To use the word "solved" just wouldn't be correct in any sense of the word.
That's just simply not correct: scientific endeavour and the scientific method are an ongoing prospect. You never actually solve anything; instead you refine the parameters within which a given solution is deemed correct.
Newton "solved" the equations of gravity, Einstein "solved" them too, and now quantum mechanics is "solving" them yet again. None are wrong, but also none of them make the problem solved.
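For concreteness, each era's "solution" side by side (standard formulas): Newton's inverse-square law, and Einstein's field equations, which reduce to it in the weak-field, slow-motion limit.

    F = G \frac{m_1 m_2}{r^2}
    \qquad\longrightarrow\qquad
    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}

Each is "correct" within its regime; what gets refined is the regime.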
The idea of a "solved problem" in science is a dangerous one IMHO.
I should have used a more accurate word than "problem".
There are two sorts of problems in science: coming up with assumptions, and answering questions under those assumptions. I was thinking about the second category. Modeling gravity falls into the first category, where problems are a lot like engineering problems: there is no final answer.
However, problems in the second category often have final answers (let's not bring Gödel's incompleteness theorems to the table) once you fix all the assumptions. Think about it: in classical mechanics, what are the possible planet trajectories? This problem has already been solved, and since it is solved, it is solved for good. Later students can simply learn the solution by heart. You can, of course, insist on going through all the trouble of finding the answer yourself (that is what physics majors often do anyway), but the point is that you don't have to, as long as you trust the science community.
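For reference, the final answer in question: in the classical two-body problem, every trajectory is a conic section (a standard result):

    r(\theta) = \frac{p}{1 + e\cos\theta},
    \qquad p = \frac{L^2}{G M m^2}

with eccentricity e < 1 an ellipse (a planet), e = 1 a parabola, and e > 1 a hyperbola. That answer can indeed simply be learned by heart.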
In software development you don't have this luxury. First, you can almost never trust libraries to be 100% bug-free; second, even if a library really is correct, you always pay performance costs for invoking it. There is no such thing as "performance" in scientific knowledge; indeed, a short proof is better than a long one, but as long as both are correct they are just as useful, in that they show the solution is correct.
All that is known in the entire field of mathematics. But yeah, mostly just mathematics. I think sorting algorithms are a solved problem, but you can say that's just math as well.
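That "solved" has a precise sense: a comparison sort must distinguish all n! input orderings, giving the classic lower bound

    \lceil \log_2(n!) \rceil = \Omega(n \log n) \text{ comparisons}

which mergesort already matches at O(n log n), so the asymptotic question is closed; only the engineering "how" (caches, branch prediction, constants) keeps evolving.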
Well, Newton's laws of motion suffice for non-relativistic motion. Maxwell's equations, I think, suffice for typical electric/magnetic fields. Judging from all the labs I did or TA'd back then, we have a pretty good handle on friction and inclined planes. While there's a lot of "science at the edges" that's ongoing, much of everyday, classical physics has a suitable solution.
As others have written, modern engineering could not be done without the existence of such working solutions.
I think that is true, which is why what this author should really be suggesting is a change to the incentives around what the "output" of a scientist should be.
Literature is great because it measures how far you're advancing the mile marker of knowledge in a field. Old distribution models like journals just help disseminate it: "hey everyone, here's the new mile marker". You then get paid based on how far you manage to advance it.
I think that, while that should remain true, it could definitely use some rebalancing. The current system is incredibly lopsided and inefficient, and too many nodes in the process are hoarded and centralized. In this case, I think science should follow art more closely.