Outside of web development, you sometimes need special hardware or specially configured workspaces for development. Usually this can be done in a container, but that comes with its own annoyances. Having a central development server for large compiled codebases is really useful.
Not true. I worked on a real distributed payment system. Every service came with a mock clone or a single-server mode to stand up all services on the same box. Every developer had a powerful personal desktop (or two). I loved it. Everyone I knew there loved it.
See comment below. Just because it wasn't true for you doesn't mean it's not true for others. I'm not talking about mocking services to run it; I'm talking about the development environment, shared with a team of 20 or so, on a code base that takes 45 minutes to compile without parallelizing the build. You also cannot mock GPU functions. There's simply no CUDA emulator.
This is an ideal case, and I wish it were like this for everybody. At least one of the payment providers I'm working with on a customer's project doesn't even have a sandbox. We must test with real money. Obviously we mock everything in unit and integration tests, but to know that it really works, the customer must put money on the manual testing account.
Those are all valid trade-offs, but it's ignoring the issue raised in the OP.
Like, is an environment like this encouraging bad habits?
But anyway...
Special hardware: such as? Very few systems can't be downscaled, and in situations where you genuinely need special hardware, we're talking about horizontal multi-server setups anyway.
Special workspaces: be less special, improve tooling, improve build operations. Very few setups actually need centralised configuration.
Large compilations: I'm not sold that a) many people have a justifiable need, and b) anyone with a real justifiable need is likely going to want on-demand autoscaling of the compilation servers, i.e. development won't be local anyway.
I'm not trying to debunk everything you've said. It's a trade-off; I've used central dev databases in the past for legacy systems and they worked well for those in the office.
All I can tell you is that iteration speed, testability of code, manual testing and all around team morale was DRASTICALLY improved by stubbing out that bottleneck in newer systems.
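To make "stubbing out that bottleneck" concrete, here's a minimal sketch of dependency injection with an in-memory fake; every name in it (PaymentGateway, FakeGateway, Checkout) is hypothetical and not from any real provider SDK:

```python
# Minimal sketch: swap a real payment provider for an in-memory fake in tests.
# All names here (PaymentGateway, FakeGateway, Checkout) are hypothetical.
from dataclasses import dataclass, field
from typing import Protocol


class PaymentGateway(Protocol):
    def charge(self, account: str, cents: int) -> bool: ...


@dataclass
class FakeGateway:
    """In-memory stand-in: records charges instead of moving real money."""
    charges: list = field(default_factory=list)

    def charge(self, account: str, cents: int) -> bool:
        self.charges.append((account, cents))
        return True


@dataclass
class Checkout:
    gateway: PaymentGateway  # real client in prod, fake in tests

    def buy(self, account: str, cents: int) -> str:
        return "ok" if self.gateway.charge(account, cents) else "declined"


# In tests, inject the fake and assert on its recorded charges:
gw = FakeGateway()
print(Checkout(gw).buy("acct-1", 499))  # prints "ok"
print(gw.charges)                       # prints [('acct-1', 499)]
```

The point isn't the mechanics, it's that once the external service hides behind a seam like this, iteration and manual testing no longer wait on a shared environment.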
I'm speaking for my individual case and others I work with, where we have codebases of over a million lines of C++, with many header-only libraries. On an 80-core server, make -j can still take 3-4 minutes, and that uses all the resources on the machine. Trust me, I wish I could have something as fast that's not centralized. The closest I can think of is either a VM (slow/wasteful) or a container. The container would be really easy if everyone used VS Code with the container development plugin, but not everyone on the team does.
For those that don't, there's much more friction: remembering to start it up, and so on.
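For reference, the VS Code plugin in question reads a .devcontainer/devcontainer.json checked into the repo, which is what removes most of the start-it-up friction for people who do use it. A minimal sketch (the project name, image tag, and build command are placeholders, not taken from any real setup):

```json
{
  "name": "big-cpp-project",
  "image": "mcr.microsoft.com/devcontainers/cpp:ubuntu",
  "customizations": {
    "vscode": {
      "extensions": ["ms-vscode.cpptools"]
    }
  },
  "postCreateCommand": "make -j$(nproc)"
}
```

That only helps the VS Code users, though, which is exactly the problem raised above: teammates on other editors still have to manage the container lifecycle by hand.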