I know of an org with ~2-3 devs who decided microservices would be cool. I warned them not to go that way, because they would surely face delivery and other issues they wouldn't have if they built on an architecture archetype that better fit the team and the solution: in my view, a modular monolith. (The codebase at that point was already a monolith, in fact, but carried a lot of tech debt from the breakneck speed at which features had to be released.)
They ignored me and went the microservices way.
Guess what?
2 years later the rebuild of the old codebase was done.
3 years later and they are still fighting delivery and other issues they would never have had if they hadn't ignored me and just gone for the "lame" monolith.
Moral of this short story: I can personally say everything this article says is pretty much true.
> 3 years later and they are still fighting delivery and other issues
Having added a fancy new technology and a "successful" project to their resume, they're supposed to move on to the next job before the consequences of their actions are fully obvious.
1 team supporting multiple services is not great, but a monolith with more than 50 developers working on it (no matter how you split your teams) isn't great either.
That's why I don't like the term "microservice", as it suggests each service should be very small. I don't think it's the case.
You can have a distributed system of multiple services of a decent size.
I know "services of a decent size" isn't as catchy as "go for one huge monolith!" or "microservices!" but that's the sensible way to approach things.
> but a monolith with more than 50 developers working on it (no matter how you split your teams) isn't great either.
Why can the game industry etc. somehow manage this fine, while in the only place where it's actually possible to adopt this kind of artificial separation over the network, it's supposedly impossible to do without it beyond an even lower number of devs than a large game has? Suggests confirmation bias to me.
The main problem with microservices is that the split is preemptive: split whatever you want when it makes sense after the fact, but intentionally splitting everything up before the fact is madness.
Note that the game industry uses the term 'developer' differently. If a game has X developers, the vast majority of those people are not programmers. Engines also do a lot to empower non-programmers to implement logic in video games, taking a lot of the workload off of programmers.
Maybe look up the game credits; I sometimes do, and I often see something like 10 UI programmers (in games with a ton of 2D UI), 5 gameplay programmers, 5 environment scripters, etc. Sure, that is not a small number of people, but it is not an army.
Also, those programmers seem to be neatly segregated into different areas of the project, which I imagine work much like microservice boundaries in keeping logic isolated between teams.
I AM surprised at just how many QA people are credited in games, though; QA for major games sure does feel like an army.
How many of those game developers are actually art and asset developers?
How many times have AAA releases been total crap?
How many times have games been delayed by months or years?
How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?
How many times have the console manufacturers said "Yeah, actually, you have the option of running a client-server architecture with as many services as you want"?
> How many times have AAA releases been total crap?
> How many times have games been delayed by months or years?
What are we arguing here? Because I can think of many microservice apps that are crap as well, and have no velocity in development.
> How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?
This is entirely irrelevant. We're talking about the trade-offs of separating networked services that could otherwise be one unit. You're saying "why do games have servers then" which is a befuddling question with an obvious answer.
That's like saying my web server is a microservice because it doesn't run in my client's browser. It makes no sense.
No. I'm saying there is no real correlation between the quality of microservices and the quality of monoliths in games, or in the amount of work required to build each one as a quality piece of software.
Comparing a game to almost any other piece of software, especially web based software, is how you end up with broken abstractions and bad analogies.
The point is that it's clearly not a major issue to have large teams working on the same monolithic codebase, the problems are just solved differently or are just vastly overstated in the first place.
I work on a monolith with ~1500 developers and it works pretty great.
The secret is that you're able to break a monolith apart, just like you can with microservices. You have APIs, and each module of the monolith is responsible for its own thing. APIs are your contracts, just like in a microservice architecture.
The difference is that you can check whether APIs are broken at compile time. You can view the API right in your IDE. And your API isn't returning wishy-washy JSON with a half-assed OpenAPI spec; it's returning real types in a full-featured type system. Cherry on top: you don't have to communicate over the network. Oh my god, you don't realize how many bugs and thousands of hours are wasted just working around that until you no longer have to. It's an immediate productivity boost.
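To make that concrete, here's a minimal Java sketch (all names made up) of what an in-process contract can look like; the compiler enforces the boundary in a way a JSON-over-HTTP call never can:

    import java.util.List;

    // Hypothetical boundary inside a monolith: the billing module exposes
    // only this interface. Callers are checked at compile time and the
    // types show up right in the IDE, with no network hop in sight.
    interface BillingApi {
        Invoice createInvoice(CustomerId customer, List<LineItem> items);
    }

    // Real types in a full-featured type system, not wishy-washy JSON.
    record CustomerId(long value) {}
    record LineItem(String sku, int quantity) {}
    record Invoice(long id, long totalCents) {}

A caller that passes the wrong type simply doesn't compile; you never wait for a 400 at runtime to find out.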
But the best part is probably deployments. It's just so, so much more straightforward with one codebase.
This is really what it comes down to right here. The real challenge is Conway's Law. Both the software architecture and the org chart need to be designed with Conway's Law in mind. If that hasn't happened then deciding between microservices and monolith is ultimately just deciding how you will be punished for your mistake.
People misunderstanding Conway's Law is a big part of the problem for sure. The law says nothing about team boundaries: it talks about communication pathways.
The paranoid socialist in me thinks big companies like team-sized microservices because it lets them prevent workers from talking to each other without completely ruling out producing running software.
When companies instead encourage forums for communication across team boundaries, it unlocks completely different architectural patterns.
The most common alternative to organizing teams by service boundaries is to organize teams around the business problems to be solved. That is a lot easier to budget for than trying to staff by microservice boundary, doesn't have the coordination and planning overhead, and it means you aren't reliant on up-front planning to get to a functional solution or design.
In high-uncertainty greenfield development, Explore projects, or Lean Startup-style experimentation, having developers be close to the users they are serving is very efficient.
It also lets those companies reteam frequently, without needing to change the software to match the new team boundaries, which is very helpful when growing the team.
Part of the problem is that many current programmers came up through functional programming or framework-based development. Microservices are often the first time they encountered modular programming or encapsulation, and so they equate "literally any architecture" with "microservices".
I've worked on monoliths with 400+ developers that were great, but it takes skills that people who have only ever worked in orgs that mandate microservice just don't have.
Functional programming precludes encapsulation, so it doesn't scale indefinitely the way fractal paradigms can. Eventually, the complexity becomes overwhelming.
One effective solution to that is introducing microservices: programmers can still write entirely functional code, but have encapsulation in the form of services. They have to be micro, though, because conventionally-sized services are still big enough to strain the paradigm.
But I see junior engineers who aren't expected to think about the "architecture", by which they mean the modular design. They are handed a spec and they implement it, Mythical Man Month style. That treats organizing lines of code and organizing services as two completely-distinct activities, and depending on the company junior engineers are often not exposed to modular design until five or ten years into their careers.
You’re suffering from a misunderstanding there. Functional programming is all about encapsulation, starting at the individual function boundary (closures can encapsulate state) and then at every layer above that.
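A tiny sketch of that first layer, in Java since it comes up all over this thread (names made up): the counter's state is captured by the closure, so nothing outside can read or reset it.

    import java.util.function.IntSupplier;

    class ClosureDemo {
        // The one-element array is state captured by the lambda below;
        // only the returned closure can touch it. Encapsulation at the
        // individual function boundary, no class required.
        static IntSupplier counter() {
            int[] count = {0};
            return () -> ++count[0];
        }

        public static void main(String[] args) {
            IntSupplier next = counter();
            System.out.println(next.getAsInt()); // 1
            System.out.println(next.getAsInt()); // 2
        }
    }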
Functional languages have some of the most rigorous module systems available. In fact Java adopted such a system recently, showing the weaknesses in its previous support for encapsulation via classes and packages.
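Concretely (module and package names made up), a module-info.java exports the API package and keeps everything else invisible to the rest of the codebase, something classes and packages alone never enforced:

    // module-info.java for a hypothetical billing module: other modules
    // can read the api package, but the internal package can't even be
    // imported at compile time.
    module com.example.billing {
        exports com.example.billing.api;
        // com.example.billing.internal is deliberately not exported.
    }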
I think he means stateless. At some point a system needs some internal state to keep things running, and that state can be really hard to manage when it is mixed in with all the other functions in the system that also hold state.
Even if that state is kept outside the service itself (like a database or event queue) it can still be really hard to reason about when said state-stores are shared across a huge codebase. Changes to a part of the state can have negative effects in completely unrelated functionality.
And of course, there is nothing blocking a monolith from isolating/modularizing their state-stores, but it tends to not happen unless the architecture forces it to happen or through strong tech leadership.
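As a sketch of what that forced isolation can look like (hypothetical names): the orders module owns its state-store outright, and every other module goes through a narrow interface, so a change to that state stays a one-module problem.

    // Hypothetical: only the orders module touches the orders tables.
    // Everyone else depends on this interface, never on the tables or
    // queues behind it, so state changes can't silently break
    // unrelated functionality elsewhere in the monolith.
    interface OrderStore {
        Order findById(long orderId);
        void save(Order order);
    }

    record Order(long id, String status) {}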
Folder and file structure and separation of concerns don't change the fact that if you have one deployable artifact, it's all sharing the same runtime when deployed. That means the underlying versions of Java/Go/Python/etc., and core shared libraries, all need to be updated at the same time. All the code is far more coupled than it first seems.
That is not really an issue I've had with Java, but I would absolutely agree that Python is wildly unsuited as a production backend language.
I don't think it's much better if you have to spend a year and a half updating 400+ different repos, though. It's much easier to use an operationalized language that knows backwards compatibility matters.
I was at AWS RDS when they upgraded the shared control plane code from Java 7 to 8. IIRC it was about 6 months for 5-10 developers more or less full-time. Absolutely massive timesink. The move to separate services happened shortly after that.
> I don't think it's much better if you have to spend a year and a half updating 400+ different repos, though.
There are two things going for separate services (which may or may not be separate repos; remember, a single repo can have multiple services):
1. You can do it piecemeal. 90% of your services will be 15-minute changes: update versions in a few files, let automated tests run, it's good to go. The 10% that have deeper compatibility issues can be addressed separately without holding back the rest. You can't separate this if you have a single deployable artifact.
2. Complexity is superlinear with respect to lines of code. Upgrading a single 1mLOC service isn't 10x harder than updating ten 100kLOC services, it's more like 20, 30x harder. Obviously this is hard to measure, but there's a reason these massive legacy codebases get stuck on ancient versions of dependencies. (And a reason companies pay out the ass for Oracle's extended Java 8 support, which they still offer.)
Monorepo is orthogonal to services though. You can have a monorepo with multiple services in it.
Even with a monorepo, you will hit a point where you have 1, 10, 100 million lines of e.g. Python, realize you should upgrade from 3.8 to 3.14 because it's EOL, and feel a lot of pain as you have to do a big-bang, all-at-once change, fixing every single breaking change, including from libraries which you also have to update. There's no way around this in current mainstream languages.
We joked about a microservice at a previous job in the same way - the frontend guys didn't want to include a dependency for generating a random ID so the microservice guy decided to build yet another microservice that did nothing but return random IDs.
As long as that team built those microservices to solve whatever problem they're responsible for solving I think it's better to let the problem domain dictate how many you need. Better to have seams that make sense in terms of the surrounding code than to have them in arbitrary places based on the org chart.
The trouble comes when some political wind blows and reshuffles the org chart, and now you're responsible for some services that only made sense in the context of a political reality that no longer exists.
I’m guessing you’re thinking of a certain kind of application (web apps perhaps?), where a monolith can make sense. But that’s not the only kind of application.
We have dozens of service components that are all largely independent of each other - combining them together would be purely a packaging decision, and wouldn’t really simplify much. In some cases, it wouldn’t make sense or even be possible at all.
An example is our execution agent, which executes customer workflows - that’s completely independent both conceptually and from a security perspective. Each agent instance executes a single flow at a time, for resource consumption and security reasons, which entails an ecosystem of services to manage that - messaging, data ingestion at scale (100K flows per day, multi-petabyte “hot” datastore for active data), orchestration, and other supporting services such as data access and network routing.
All of our teams support multiple services, and many of them qualify as microservices.
Yep, I remember working at a place where one team churned out a large majority of the microservices and the other three teams just kept on doing their thing.
The microservice team was especially terrible because they did all the initial work and basked in the "glory", but when it came to maintaining the services, they wanted nothing to do with it.
At some point it just doesn't scale for the team that owns all the micro services. It's poor organizational decision making to have this setup.
Sure a monolith that does 1000 things might not be ideal, but 100 repos for 100 micro services owned by a team of 5-10 devs is unmanageable on the other end of the spectrum. Oh and everyone forgets the orchestration layer.
The best use case is promotion! Welcome to big tech, where all the teams get reshuffled every few months and every microservice exists because some dev needed a promotion. The greater the ratio of microservices to devs, the better your manager looks! (Dev work-life balance be damned, we pay you to ruin your life.)
I mean, "GREAT" until you need to do any kind of refactoring, or the company grows, or shrinks, or reorgs, or you have a feature that needs to change more than one service.
The "one team per microservice" makes code-enclosure style code ownership possible, but it is the least efficient way I have ever seen software written.
I've long wanted to hack an IDE so people are only allowed to change the Java objects they created, and then put six Java programmers in a room and make them write an application, yelling back and forth across the room. "CAN YOU ADD A VERSION OF THAT METHOD THAT ACCEPTS THIS NEW CLASS?" "SURE THING! TRY THAT?"
People discount the costs of microservices because they make management's job easier, especially when companies have adopted global promotion processes. But unless they are solving a real technical constraint, they are a shitty way to work as an engineer.
I suspect a lot of the issues teams encounter with microservices stem from a lack of cohesive understanding of microservices.
If people on the team continue to think about the "system" as a monolith (what they already know and are comfortable with), you'll hit friction every step of the way, from design all the way out to deployment. Microservices throw out a lot of traditional assumptions and designs, which can be hard for people to subscribe to.
I think there has to be adequate "buy-in" throughout the org for it to be successful. Turning an existing monolith into microservices is very likely to meet lots of internal resistance, as people have varying levels of being "with it", so to speak.
> 2 years later the rebuild of the old codebase was done.
>
> 3 years later and they are still fighting delivery and other issues they would never have had if they hadn't ignored me and just gone for the "lame" monolith.
I had a similar experience setting up the infra for an 8-12 microservice application. The project had been dragging and no one really understood what they were doing. When I started asking scale questions, the answer came back that this was an internal admin UI for a 9-5 business that would have 5-10 users.
One place I worked at got sold on microservices by Thoughtworks, along with a change to Java as the main language to be used.
As one would expect, they made bank from their consulting endeavor and rode off into the sunset while the rest of us wasted several years of our careers rewriting ugly but functional monolithic code into distributed Java based microservices. We could have been working on features and product but essentially were justifying a grift, adding new and novel bugs as we rebuilt stable APIs from scratch.
The company went under not long after the project was abandoned. Nobody, of course, would be held to account for it. I will no longer touch a tech consultancy like TW with a 10 foot barge pole.