I don't think that Microsoft can claim to be blameless here "because it is too hard".

If we have 32 000 copies of the same code in a large database with a linking structure between the records, then we should be able to discern which are the high-provenance sources in the network and which are the low-provenance copies. The problem is, after all, remarkably similar to building a search engine.
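
To make that concrete: a minimal sketch in Python, assuming we already had a directed "copied-from" edge between records (recovering those edges is its own problem, and everything here is hypothetical). The scoring is just PageRank run along the copy edges, so authority accumulates at the originals:

    def provenance_rank(copied_from, iters=50, damping=0.85):
        """copied_from maps each record id to the ids it was copied from."""
        nodes = set(copied_from)
        for sources in copied_from.values():
            nodes |= set(sources)
        n = len(nodes)
        rank = {v: 1.0 / n for v in nodes}
        for _ in range(iters):
            new = {v: (1.0 - damping) / n for v in nodes}
            dangling = 0.0
            for v in nodes:
                sources = copied_from.get(v, [])
                if sources:
                    # A copy passes its authority back to its sources.
                    share = damping * rank[v] / len(sources)
                    for s in sources:
                        new[s] += share
                else:
                    # Originals have no sources; spread their mass uniformly.
                    dangling += damping * rank[v]
            for v in nodes:
                new[v] += dangling / n
            rank = new
        return rank

    # Hypothetical copy graph: two direct copies, plus one copy of a copy.
    ranks = provenance_rank({
        "copy_a": ["original"],
        "copy_b": ["original"],
        "copy_c": ["copy_a"],
    })
    print(max(ranks, key=ranks.get))  # "original" scores highest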

There is no formal linking structure in many, if not most, cases. Ctrl+V is the weapon of choice of many a programmer. To say nothing of somebody then making superficial changes to the code to, for instance, fit their personal style or adapt it to their project. And on top of that, GitHub is not the alpha and omega of code: the original could have been published anywhere, or even nowhere, as in a case of theft.
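
For what it's worth, the superficial edits are the tractable part: fingerprinting over normalized token shingles (the idea behind plagiarism detectors like Moss) survives renames and restyling. A toy sketch in Python, none of this is any real tool's API, and it does nothing for the harder cases below:

    import hashlib
    import re

    def fingerprint(code, k=5):
        """Set of hashes over k-token windows, identifiers normalized away."""
        tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
        # Crude normalization: every identifier-ish token becomes "ID",
        # so renaming variables doesn't change the fingerprint.
        tokens = ["ID" if re.match(r"[A-Za-z_]\w*$", t) else t for t in tokens]
        return {
            hashlib.sha1(" ".join(tokens[i:i + k]).encode()).hexdigest()
            for i in range(max(1, len(tokens) - k + 1))
        }

    def similarity(a, b):
        """Jaccard similarity of two fingerprints (1.0 = identical)."""
        fa, fb = fingerprint(a), fingerprint(b)
        return len(fa & fb) / len(fa | fb)

    original = "def area(radius): return 3.14159 * radius * radius"
    restyled = "def circleArea(r): return 3.14159 * r * r"
    print(similarity(original, restyled))  # high despite the renames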

Then there's also parallel discovery. People frequently come to the same solution at roughly the same time, completely independently. And this is nothing new. For instance, who discovered calculus, Newton or Leibniz? It was a roaring controversy at the time, with both claiming credit. The reality is that they likely both discovered it independently, at about the same time. And there are a whole lot more people working on stuff now than in Newton's time!

There's also just parallel creation. Task enough people with creating an octree-based level-of-detail system in computer graphics and you're going to get a lot of relatively lengthy code that looks extremely similar, despite it being a generally esoteric and non-trivial problem.
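
To illustrate, here is roughly the fragment nearly every independent octree implementation converges on. The names are made up, but the bit-packing is more or less forced by the geometry:

    def child_index(px, py, pz, cx, cy, cz):
        """Which of the 8 octants of a node centered at (cx, cy, cz)
        contains the point (px, py, pz)? One bit per axis."""
        return (px >= cx) | ((py >= cy) << 1) | ((pz >= cz) << 2)

    def child_center(cx, cy, cz, half, index):
        """Center of child octant `index` of a node with half-size `half`."""
        q = half / 2
        return (
            cx + (q if index & 1 else -q),
            cy + (q if index & 2 else -q),
            cz + (q if index & 4 else -q),
        )

Two strangers writing those functions from scratch would be hard-pressed to make them look meaningfully different.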