I obviously know who is behind BitTorrent; that is why I mentioned it as a possible alternative. But you yourself state that Stanislav Shalunov is the man behind uTP. More importantly, shout-outs traditionally reference the person/thing in question explicitly. I was not sure whether there was a longer/muddier/more-controversial backstory that would not be appropriate to discuss in an ACM article.
While other languages claim they can be used for such things, I have yet to hear of it being done in a real deployed piece of commercial software, despite the obvious security benefits. I'd be interested in hearing any success stories.
For some context: Git follows the same architecture as Codeville, so Linus didn't invent the idea (although he ripped it off from Monotone, not Codeville). The argument was essentially about whether a simple three-way merge can be used in all cases, and the answer is no, because of criss-cross cases; solutions for that have since been put into Git. It is the case that semantics which more closely resemble three-way merge are preferred, though, as explained in the post this thread is about, but for reasons that nobody in the flame war you link to appreciated at the time.
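For anyone who hasn't seen one, a criss-cross history is easy to reproduce: two branches each merge the other's pre-merge tip, after which there is no single common ancestor for a plain three-way merge to use. A minimal sketch (repo path and branch name `side` are made up for illustration; `git merge-base --all` reports both ancestors):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

git commit -q --allow-empty -m O            # common ancestor O
git branch side                             # second branch, also at O
echo a > a.txt && git add a.txt && git commit -q -m a1
A1=$(git rev-parse HEAD)                    # tip of first branch before merging
git checkout -q side
echo b > b.txt && git add b.txt && git commit -q -m b1
B1=$(git rev-parse HEAD)                    # tip of side before merging

git merge -q -m "side merges a1" "$A1"      # side pulls in a1
git checkout -q -                           # back to the first branch
git merge -q -m "merges b1" "$B1"           # first branch pulls in b1

# The two heads now have TWO best common ancestors: a1 and b1.
git merge-base --all HEAD side
```

Git's default "recursive" (now "ort") strategy handles this by merging the merge bases with each other first and using the result as a virtual ancestor.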
I am not considering third-party patents, but Apple's. If Apple includes some patented tech in a contribution to LLVM or Clang, what guarantees do I have that they will not sue non-Apple downstream users?
Then I suppose the community can fork from the revision prior to where that patented technique landed in the tree, just as with any other OSI-approved license.
Depends on one's perspective. Not long ago the meme du jour was that open-source transparency and forkability were the ultimate protection against corporate evils.
By the standard you're setting, the only safe compiler would be one you wrote yourself and revealed to no one. What compilers do you use?
While the author of this paper does not appear to be a crank, nowhere does the paper discuss why the fundamental barriers of naturalization, algebrization, and relativization don't apply to the work, which makes it seem unlikely that those barriers have actually been overcome.
From what little I understand, those barriers prevent only certain proof strategies from working. So for instance, the Razborov-Rudich barrier concerns a class of combinatorial proofs (the so-called natural proofs); this paper uses two techniques - statistical mechanics and model theory - which I gather are out of the province of RR.
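For reference, here is my loose paraphrase of what Razborov–Rudich actually rule out (the informal statement below is from memory, so treat the exact constants as approximate):

```latex
% A property $\Gamma$ of Boolean functions $f : \{0,1\}^n \to \{0,1\}$
% is "natural" (Razborov--Rudich) if it is:
%   constructive: membership $f \in \Gamma$ is decidable in time
%                 $\mathrm{poly}(2^n)$ given $f$'s truth table, and
%   large:        $\Gamma$ contains at least a $2^{-O(n)}$ fraction of
%                 all $n$-bit Boolean functions.
% Theorem (informal): if exponentially hard pseudorandom function
% families exist, then no natural property is useful against
% $\mathsf{P/poly}$ --- i.e., none can certify superpolynomial
% circuit lower bounds.
```

So a proof escapes the barrier only if its key property fails constructivity or largeness, regardless of which mathematical domain the techniques come from.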
"Only certain proof strategies" is technically correct, but it's closer to "essentially every proof strategy we can conceive of".
And besides, the question is over the entire proof strategy and not the specific techniques involved. It seems plausible that one could give a relativizing proof using some method of calculation from statistical mechanics, for example.
Taking methods from a different domain in no way shows that the mechanics of those methods don't reduce to the same mechanics as a "natural proof". Further, it makes the actual process much more obscure.
Indeed, statistics in general seems like a hard approach for overcoming the Razborov-Rudich limit, since the limit is on showing that a "typical" function has certain properties, and anything statistical seems like it would be "typical".
But who knows really, maybe Razborov-Rudich isn't correct, since in full generality it relies on things that "mathematicians generally believe" (the existence of certain pseudorandom functions) rather than things that are definitely proven - as far as I know.
Ah, that is not how those "barriers" work. Roughly, the relativization barrier goes like this: Say you have a proof that P!=NP. Does it also prove that P^A != NP^A for any oracle A? If it does, then the proof is flawed, because there <i>does</i> exist an oracle A such that P^A = NP^A!
Such proofs are said to relativize -- i.e., they are still valid relative to any oracle.
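To make that concrete, the classic Baker–Gill–Solovay result exhibits oracles on both sides, which is exactly why a relativizing argument can never settle the question:

```latex
% Baker--Gill--Solovay (1975): oracles exist on both sides, so no
% relativizing proof can resolve P vs NP either way.
\exists A :\quad \mathsf{P}^{A} = \mathsf{NP}^{A}
  \quad\text{(take $A$ to be any $\mathsf{PSPACE}$-complete language)}
\exists B :\quad \mathsf{P}^{B} \neq \mathsf{NP}^{B}
  \quad\text{($B$ constructed by diagonalization)}
```

A correct P != NP proof therefore has to exploit some structure of real computation that oracle machines don't share.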