Doing exactly the opposite is also a valid approach:
Absorb as many features as possible, so that more people gravitate toward a single, centralized place. As for bugs: if the original feature submitter isn't around, potential new maintainers will emerge and fix them.
Most projects developed this way die of development-resource starvation. Once you have added all those features, you will face all the consequences of maintaining a complex system -- difficulty getting anything done, new devs needing a substantial amount of time to get started, people getting discouraged quickly, etc. As you add complexity, the ways users use your platform grow exponentially -- potentially causing an explosion of bugs.
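A toy illustration of that growth (my numbers, not the commenter's): with n independently toggleable features there are 2^n possible configurations, so every feature you accept roughly doubles the surface you'd have to test and support.

    # Toy calculation: configurations grow exponentially with feature count.
    for n in (5, 10, 20, 30):
        print(f"{n} optional features -> {2 ** n:,} possible configurations")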
I don't mean that users using your platform in many different ways is bad; the question is "Can you support it?"
If you run into developer-resource starvation, users will notice that you are making no progress and will find an alternative, which in turn makes your project even less enticing for contributors. This is how a project dies.
If you are a single developer and want to do something useful for the community, your best shot is to make something simple and add things judiciously.
So how do large open source projects survive, and how are they different? The issue is that, when it comes to contributors, relatively few large projects receive most of the contributions. Everybody wants to contribute to something well known, high profile, and used by a lot of people. Not many developers want to make it their mission to support a useful but obscure plugin to something else.
I'd say precedent is against you here. There are many examples of widely used open source projects that don't get lots of free maintenance effort.
Maintaining something well is hard and requires commitment; you can't just crowdsource it with people dipping in and out with the occasional fix or new feature. Being careful about which PRs you accept, especially big new features, is crucial to the health of an open source project and the mental health of its maintainers.
Doing this will exponentially increase the glue code required to tie everything together.
So while new developers fix the features, they'll also need to fix that glue layer, making the work two or three times as big. And things will only get more complicated as more features are added to the mix.
This is why the UNIX philosophy is very important. It keeps software simple, compact, and much easier to maintain.
It can work, but you have to have a vision for how to integrate extra features, not just shove them into the codebase.
A consistent plugin API that allows plugging code into various parts of the app takes a bunch of effort, but then you're magically separated from contributors by a wall of code: you're never a blocker for someone else's effort to add features, and the stuff you don't want to maintain can land in a contrib repo.
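A minimal sketch of that kind of seam, assuming a hook-based registry (all names here are hypothetical, just to show how contributed code stays behind the wall):

    # Hypothetical hook registry: the core app exposes named extension
    # points; plugins register callbacks without touching core files.
    from collections import defaultdict
    from typing import Callable

    _hooks: dict[str, list[Callable]] = defaultdict(list)

    def register(hook_name: str):
        """Decorator used by plugins to attach a callback to a hook."""
        def wrap(func: Callable) -> Callable:
            _hooks[hook_name].append(func)
            return func
        return wrap

    def run_hook(hook_name: str, *args, **kwargs):
        """Called by the core app at each of its extension points."""
        return [func(*args, **kwargs) for func in _hooks[hook_name]]

    # A contrib plugin only needs the registry, never the core internals:
    @register("document_saved")
    def notify(path):
        print(f"plugin: {path} was saved")

    # Somewhere inside the core app:
    run_hook("document_saved", "/tmp/draft.txt")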
For all the negative responses you get, there are a bunch of open source projects that work kinda like this.
Lazarus, and to a lesser extent Free Pascal, are basically like that. New features are added often, and bug fixes to code written by others who aren't around (or don't have time) are submitted all the time (e.g. personally I'm more of a user than a developer of the project, but I always use the git version built from source so that I can quickly fix any bugs that annoy me). The codebase is also far from small, and the project is an IDE that tries to do "everything", so it is far from "doing one thing".
Lazarus has been around for at least a couple of decades, so I think that shows projects do not necessarily die when doing that.
It might have to do with not having some big company push its weight around, so it is largely community developed and driven. Also, it is by far the most popular IDE for the language it is written in, so perhaps it is easier to find contributors.
This may work as a strategy for a VC-backed startup, where you just throw money into resources to create the largest possible gravity for your product, and solve the technical debt by growing your team once you have a critical (paying) userbase.
But how would this work when there's no money to throw around, your product is free and every contributor is actually a volunteer who still needs to earn his living elsewhere?
The other factor to consider is how features complicate unrelated maintenance and new development. If you accept new features which affect other parts of the codebase, you might find that the things you can actually find volunteers to work on are made harder by something most people don't use and thus aren't jumping to spend time supporting.
Years back I worked on a search engine abstraction library for Django. I'm not sure the concept is affordable at all -- search engines are less alike than SQL databases -- but one thing we constantly had problems with was that most volunteers only needed one backend, while any new abstraction would need to be implemented for at least three.
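As a concrete sketch of that multiplication (hypothetical classes, not the actual library's API): every method added to the abstract base has to be implemented once per engine, so a feature that one volunteer needs for one backend still costs three implementations.

    # Hypothetical backend abstraction: one new method on the base class
    # means one implementation per supported engine.
    from abc import ABC, abstractmethod

    class SearchBackend(ABC):
        @abstractmethod
        def search(self, query: str) -> list[str]: ...

        @abstractmethod
        def facet_counts(self, field: str) -> dict[str, int]: ...  # new abstraction

    class ElasticsearchBackend(SearchBackend):
        def search(self, query): return []        # engine-specific query DSL
        def facet_counts(self, field): return {}  # must be written here...

    class SolrBackend(SearchBackend):
        def search(self, query): return []
        def facet_counts(self, field): return {}  # ...and here...

    class WhooshBackend(SearchBackend):
        def search(self, query): return []
        def facet_counts(self, field): return {}  # ...and here too.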
JabRef, which uses a .bib file as storage. The importing features are almost the same (all major sources are supported). Tagging and note-taking are nice. Full-text search is implemented with the mighty Lucene. The Word plugin and browser import plugin are good too.
Both Zotero and JabRef are at a similar level of simplicity. However, they are very different in the details if you use them heavily.
Perhaps I'm misreading your question: are you asking why they invest in fonts?
If so, the answer is that fonts are a major underpinning of how an organization or a product is presented to the world.
To make an analogy that the HN crowd might understand, the value of a typeface to a design system is like the value of a cryptographic primitive or a library like OpenSSL to a functioning security system.
Nobody (aside from designers and typographers) will think "oh wow that font is so great they must have invested a lot into it", but it underlies every written visual interaction between the brand/product/company and its audience.
That said, are there many other fonts available that fall into the appropriate style, usage, licensing terms, etc. that GitHub needs? Perhaps. But uniqueness is a value in itself, and having made a decision about the set of fonts the company uses is a pretty important thing.
It's like the 3-spaces vs. 4-spaces vs. tabs discussion. Some people use one, other people use another. But in a company, having an approved typeface that is recognized internally as the typeface to use (of course, you can have different typefaces for different scenarios/uses) is just as important as determining which one to use.
In other words, the marketing budget was big and needed to be used up, or else it would shrink next fiscal cycle.
/s
Realistically, this is probably very cost efficient as it’s a relatively low effort project compared to a fully fledged feature, and something that can make someone say “hey, that y reminds me of GitHub”.
Fine, but are there any good reasons to spend resources on fonts?
Maybe if they spent fewer resources on achievements, fonts, etc. and more on reliability, they could reduce their downtime, which seems to be happening more and more often.
The time/investment they spent on their two brand identity typefaces is one side of the equation; the other side is the many, many minutes spent on font decisions in each and every little project in their company, if employees are allowed (or maybe even encouraged, since it can be quite motivating) to dabble a bit in "visual project identity". And on font-licensing awareness training, because those engineer doodles will likely be public. And a Microsoft subsidiary will be an extremely juicy target for licence vultures; GitHub simply can't afford any mistakes in that field.
This is where the wide parameterization of the fonts comes into play: "use one of those two fonts, feel free to go wild with the parameters" is much more likely to actually be followed than "use one of those n fonts, no exceptions", for almost any value of n. And better for morale as well.
You're saying it's better for morale, but do you have any proof? My personal experience says the contrary: the company I work at did the same (albeit it's much smaller than GitHub), and when I asked some coworkers what they thought about it, about three of them considered it a waste of resources and I only remember one saying it was OK, so overall morale went down.
Standards make sense, but how many sans-serif fonts do we really need as a species? It was someone's job here to set out and reinvent the wheel, and that seems like a waste of effort to me.
It's pocket change for GitHub but fantastic branding. Lots of people absolutely love JetBrains Mono, for example, even people who don't use their products, but they won't forget who JetBrains are. If I were in marketing at a tech company with a huge marketing budget, I'd certainly be considering a typeface in the outreach mix.
To save money on licensing existing fonts. Own the font, do what you want with it, independent of whatever license changes a third-party might impose years down the line.
They're different beasts. In fact, they're different enough that they have their own niches. Gnome is the boring, serious, stop-fiddling-and-get-things-done UI. KDE is the quirky uncle that lets you play with his porcelain figures.
These days I prefer KDE, and I accept quirky behavior like losing a widget panel from external monitor 1 if you unplug external monitor 2, because I know it will come back after reboot/relogin. In exchange for that, I get different wallpapers for each monitor, custom/extra panels, several alternatives for application menus and taskbars, lots of widgets, and a few small QOL perks I can't remember right now.
If KDE becomes too quirky, I can always go back to Gnome, but right now I'm happy with KDE.
There's nothing literally stopping someone from clicking "fork" on GitHub. There are tons of things that make it difficult to create a successful fork that people actually use and contribute to.
Though in this case I guess they could just use Gogs anyway.
The term DAO as it's used now is orthogonal to blockchain and crypto.
It's mostly a term to describe codified governance/community systems. There may be point systems implemented via crypto tokens, but in the vast majority of cases those would be better implemented on centralized ledgers since there's someone with centralized control over either the token issuance, the communication channels, or other essential resources.
If they were developed now, Stack Overflow and Wikipedia would be considered DAOs.
That would just indicate the term 'DAO' doesn't mean anything at all.
I'm unable to find an actual example of anything calling itself a DAO which doesn't also play with cryptocurrency tokens, so I'm pretty sure the term isn't used outside of that fanbase.
Stack Overflow was developed as (and is still operated as) a for-profit, profit-motivated corporation (Stack Exchange Inc.). No one considers it a DAO in any form, especially not the "autonomous" part. Unless you mean the gamified community elements, but you can't really call that an "organization" either, because it's entirely disorganized (and sometimes highly dysfunctional) beyond the game mechanics and, more importantly, extremely centralized to Stack Overflow's servers.
Wikipedia is run as a not-for-profit foundation (the Wikimedia Foundation). It doesn't seem anything like a DAO either. Again, unless you mean the community of contributors to Wikipedia, which is also extremely centralized to Wikipedia's servers and doesn't have anything resembling "autonomous", not even something resembling Stack Overflow's game mechanics.
The "autonomous" in DAO still only means "smart contracts" and no one is using DAO as a term for traditional corporate structures other than those intentionally confused by or wishing to confuse what "autonomous" means in the acronym.
Yes, for users who weren't scared of .ssh and even edited its contents from time to time. Because these were just their files, not "junk from apps and configs that hurt my sense of beauty". Those who want to live in a clean folder can mkdir it.
"Short and concise" isn't much praise. If I wanted something short and concise, I could say "is bad", which is probably both more concise and more honest. "Mtime comparison is bad" is more clear and more concise.
"Considered harmful" is misleading because the passive voice suggests some kind of general consensus which usually doesn't exist.
Kate has had LSP support for a few years already.