No. This was kind of true in the days of CVS and SVN. Now with DVCSs like Git, it's easy enough to merge in changes from wherever; this was the whole reason Tom Lord originally developed the modern DVCS. Git workflows commonly make a new fork, in the form of a topic branch, for every bug fix. If other people don't want to merge in your changes, they probably don't think they're good enough. But that doesn't matter; you can still use them. If they're wrong, it's their loss.
So, it's true that refactoring isn't going to merge automatically with either feature additions or other refactoring of the same code. But if the refactoring genuinely improved the code, the person making the other change will probably want to merge it in before making theirs. There are cases where clashes arise between different visions of what “improvement” means — XEmacs was born that way — but in those cases it's obviously best for people to have the choice of which version of the code they'd rather work on, rather than bullying one version of it out of existence.
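To make the "easy enough to merge" claim concrete, here is a minimal sketch of pulling someone else's refactoring into your own line of development with Git; the remote name, URL, and branch names are hypothetical stand-ins:

    git remote add refactor-fork https://example.com/their/repo.git  # their fork (hypothetical URL)
    git fetch refactor-fork                                          # pull down their history
    git merge refactor-fork/cleanup                                  # fold their refactoring into your current branch
    # or, to keep a linear history, replay your own branch on top of theirs:
    git rebase refactor-fork/cleanup my-feature

That's the whole cost of taking someone else's improvements, whether or not they ever take yours.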
In many cases of profound changes, different versions of the software do a better job of serving different groups of users. For example, EVE Online is built on Stackless Python, a fork of the CPython interpreter that enabled massive concurrency. This was really important to the EVE Online developers, so they were willing to accept tradeoffs that the core Python developers didn't think were good ones. Most people have gone on to use the stock interpreter, rather than choosing Stackless, so probably the core Python team made the right choice for most people. But that's no reason to deprive EVE Online of the ability to try a different approach.
Similarly, when I installed my first Linux box in 1996, there was an "NFS swapping patch" which made it possible to swap onto NFS. Stock Linux couldn't do this, because swapping over the network meant that many operations that were normally "atomic" could no longer be atomic. (This was before SMP support, so the kernel didn't have locks; atomicity was enough.) The patch introduced a new "nucleonic" priority level and redefined the notion of atomicity, adding a lot of complexity to Linux to support the marginal use case of running diskless X-terminals and the like on Linux, rather than, say, SunOS or NCD's shitty imitation of VMS. This was not a good tradeoff for the majority of users, and it did not merge well with changes to network drivers. But for a certain subset of users, it was extremely valuable and worth the tradeoff. This complexity was eventually added in a better way when Linux got real SMP support.
Similarly, LuaJIT prioritizes speed (and secondarily minimality), while PUC Lua prioritizes portability and minimality (and secondarily speed); PUC Lua is currently about half the size of LuaJIT and delivers about a tenth of its performance. LuaJIT is, generally speaking, about as fast as C, but originally it was i386-only, later adding amd64, ARM, PowerPC, and MIPS support; it still doesn't support aarch64, which is what most new hand computers use. PUC Lua, by contrast, runs anywhere there's a C compiler and a few hundred K of memory. Losing either of these two projects would be terrible.
So, no, forking projects should not be a "last resort." Being able to fork projects is one of the core benefits provided to users by open source. It's the reason Tom Lord invented the DVCS as we know it today, in the form of Arch, and it's the reason Linus wrote a DVCS to use instead of Subversion: to ensure that the users' freedom to fork their software didn't become merely theoretical.
I never used BitKeeper or TeamWare and consequently don't know how their capabilities were similar to or different from arch's at the time. Did you use them?
Tom had a pretty clear vision for arch which is more or less what Git ended up achieving. Everything he wrote publicly about it derived the feature requirements from his social agenda; he never mentioned BitKeeper, and I don't think he knew about it. He must have known about TeamWare, but I don't know if he had ever used it, and from the little I can glean, TeamWare lacked some very significant things crucial to what we think of as a DVCS today. But maybe I'm wrong about that.