
This is true for feature-driven projects, where features are added on top of existing code and only some existing code is changed.

But you will have a very hard time merging two refactoring branches that both touch internal data structures.



So, it's true that refactoring isn't going to merge automatically with either feature additions or other refactoring of the same code. But if the code was improved by the refactoring, probably the person making the other change will want to merge it in before making their change. There are cases where clashes arise between different visions of what “improvement” means — XEmacs was born that way — but in those cases it's obviously best for people to have the choice of which version of the code they'd rather work on, rather than bullying one version of it out of existence.

In many cases of profound changes, different versions of the software do a better job of serving different groups of users. For example, EVE Online is built on Stackless Python, a fork of the CPython interpreter that enabled massive concurrency. This was really important to the EVE Online developers, so they were willing to accept tradeoffs that the core Python developers didn't think were good ones. Most people have gone on to use the stock interpreter, rather than choosing Stackless, so probably the core Python team made the right choice for most people. But that's no reason to deprive EVE Online of the ability to try a different approach.

Similarly, when I installed my first Linux box in 1996, there was an "NFS swapping patch" that made it possible to swap over NFS, something stock Linux couldn't do because it meant that many operations that were normally "atomic" could no longer be atomic. (This was before SMP support, so the kernel didn't have locks; atomicity was enough.) The patch introduced a new "nucleonic" priority level and redefined the notion of atomicity, adding a lot of complexity to Linux to support the marginal use case of running diskless X-terminals and the like on Linux rather than, say, SunOS or NCD's shitty imitation of VMS. This was not a good tradeoff for the majority of users, and it did not merge well with changes to network drivers. But for a certain subset of users, it was extremely valuable and worth the tradeoff. This complexity was eventually added in a better way when Linux got real SMP support.

Similarly, LuaJIT prioritizes speed (and secondarily minimality), while PUC Lua prioritizes portability and minimality (and secondarily speed); PUC Lua is currently about half the size of LuaJIT and has roughly a tenth of its performance. LuaJIT is, generally speaking, about as fast as C, but originally it was i386-only, later adding amd64, ARM, PowerPC, and MIPS support; it still doesn't support aarch64, which is what most new hand computers use. PUC Lua, by contrast, runs anywhere there's a C compiler and a few hundred K of memory. Losing either of these two projects would be terrible.

So, no, forking projects should not be a "last resort." Being able to fork projects is one of the core benefits provided to users by open source. It's the reason Tom Lord invented the DVCS as we know it today, in the form of Arch, and it's the reason Linus wrote a DVCS to use instead of Subversion: to ensure that the users' freedom to fork their software didn't become merely theoretical.



