To me, the idea that they went back to a thirty-year-old algorithm is the one credible thing in the whole story. Almost nobody reads old papers in computer science.
Somewhere out there in the libraries are the computing equivalents of transparent aluminum, but you can barely get researchers to look at this stuff, let alone Joe Javahead.
Good point. And actually, from another post in this thread, it sounds like it is more than a patent/algorithm; it is a painstakingly built system. So yes, perhaps they have something. We will see.
As for your comment about researchers not being aware of the literature, I agree 100%. I review from time to time, and the number of papers that are reinventing the wheel (and doing so in a sloppy way) is staggering. I think the problem is that too many researchers are concerned only with racking up as many papers as possible to beat the tenure clock and/or to impress their rivals.
Evaluators somehow need to stop bean-counting publications as a measure of merit. The problem they face is that they don't know how else to evaluate...