> but a monolith with more than 50 developers working on it (no matter how you split your teams) isn't great either.
Why can the game industry (among others) manage this fine, yet in the one place where this kind of artificial separation over the network is even possible, it's somehow impossible to avoid it beyond an even lower developer count than a large game has? Suggests confirmation bias to me.
The main problem with microservices is that the split is preemptive. Split whatever you want once it makes sense after the fact, but intentionally splitting everything up before the fact is madness.
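To make "split after the fact" concrete, here's a minimal Go sketch (the `Billing` interface, `localBilling`, and `checkout` are made-up names for illustration, not anyone's actual code). As long as callers depend only on an in-process boundary, extracting that piece into its own service later is a swap of implementation, not a rewrite of every caller:

```go
package main

import "fmt"

// Invoice is a hypothetical domain type used by the rest of the monolith.
type Invoice struct {
	CustomerID string
	Total      int
}

// Billing is the boundary the rest of the codebase programs against.
// Callers neither know nor care whether this is a local struct or,
// some day, a client that talks to an extracted billing service.
type Billing interface {
	Charge(inv Invoice) error
}

// localBilling is the in-process implementation the monolith ships with today.
type localBilling struct{}

func (localBilling) Charge(inv Invoice) error {
	fmt.Printf("charging %s for %d (in-process)\n", inv.CustomerID, inv.Total)
	return nil
}

// checkout depends only on the Billing interface, so splitting billing out
// later means providing a networked implementation, not rewriting callers.
func checkout(b Billing, inv Invoice) error {
	return b.Charge(inv)
}

func main() {
	_ = checkout(localBilling{}, Invoice{CustomerID: "c-42", Total: 1999})
}
```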
Note that the game industry uses the term 'developer' differently. If a game has X developers, the vast majority of those people are not programmers. Engines also do a lot to empower non-programmers to implement logic in video games, taking much of the workload off of programmers.
Maybe look up the game credits; I sometimes do, and I often see something like 10 UI programmers (in games with a ton of 2D UI), 5 gameplay programmers, 5 environment scripting, etc. Sure, it's not a small number of people, but it's not an army.
Also, those programmers seem to be neatly segregated into different areas of the project, which I imagine work similarly to microservice boundaries in keeping logic isolated between teams. A rough sketch of what that looks like in code is below.
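Something like this toy Go sketch (the `Inventory` and `RenderInventory` names are invented for illustration; in a real codebase these would be separate packages owned by different teams) is roughly how those boundaries tend to look inside a single process: a narrow interface between one team's area and another's, enforced at compile time rather than over the network:

```go
package main

import "fmt"

// --- inventory team's area ---

// Item is the only type the inventory team exposes to the rest of the game.
type Item struct {
	Name  string
	Count int
}

// Inventory is the narrow surface other teams are allowed to call.
// Everything behind it (storage, stacking rules, etc.) stays private to the team.
type Inventory interface {
	Items() []Item
}

type inventoryImpl struct{ items []Item }

func (inv *inventoryImpl) Items() []Item { return inv.items }

func NewInventory() Inventory {
	return &inventoryImpl{items: []Item{{Name: "potion", Count: 3}}}
}

// --- UI team's area ---

// RenderInventory depends only on the Inventory interface; the UI team can't
// reach into inventory internals, much like a service boundary would prevent,
// but the call is a plain function call with no network hop in between.
func RenderInventory(inv Inventory) {
	for _, it := range inv.Items() {
		fmt.Printf("%s x%d\n", it.Name, it.Count)
	}
}

func main() {
	RenderInventory(NewInventory())
}
```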
I AM surprised by just how many QA people are credited in games, though; QA for major games sure does feel like an army.
How many of those game developers are actually art and asset developers?
How many times have AAA releases been total crap?
How many times have games been delayed by months or years?
How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?
How many times have the console manufacturers said "Yeah, actually you have the option of running a client-server architecture with as many services as you want"?
> How many times have AAA releases been total crap?
> How many times have games been delayed by months or years?
What are we arguing here? Because I can think of many microservice apps that are crap as well, and have no velocity in development.
> How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?
This is entirely irrelevant. We're talking about the trade-offs of separating networked services that could otherwise be one unit. You're saying "why do games have servers then" which is a befuddling question with an obvious answer.
That's like saying my web server is a microservice because it doesn't run in my client's browser. It makes no sense.
No. I'm saying there is no real correlation between the quality of microservices, the quality of monorepos in games, and the amount of work required to build each one into quality software.
Comparing a game to almost any other piece of software, especially web-based software, is how you end up with broken abstractions and bad analogies.
The point is that it's clearly not a major issue to have large teams working on the same monolithic codebase; the problems are just solved differently, or are vastly overstated in the first place.