
Not specifically about event-driven, but the most damaging anti-pattern I would say is microservices.

In pretty much all the projects I have worked on in recent years, people chop the functionality into small separate services and have events serialised, sent over the network, and deserialised on the other side.

This typically causes an enormous loss of efficiency and, as a consequence, makes applications much more complex than they need to be.

I have many times worked with apps that occupied huge server farms when the business logic would have run fine on a single node if it were structured correctly.

Add to that the amount of technology developers need to learn when they join the project, or the amount of complexity they have to grasp to be productive. Or the overhead of introducing a change to a complex project.

And the funniest of all: people spend a significant portion of the project's resources trying to improve the performance of a collection of slow nanoservices without ever realising that the main culprit is that event processing spends 99.9% of its time being serialised, deserialised, sitting in various buffers, or in transit, all of which could easily be avoided if the communication were a simple function call.
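
To make the cost concrete, here is a toy sketch (all names hypothetical; JDK serialisation stands in for whatever wire format is in play) of the same event handled by a direct call versus the serialise/deserialise round trip every network hop adds:

  import java.io.*;

  public class HopOverhead {
      // Hypothetical event type; Serializable stands in for JSON/protobuf/etc.
      record OrderEvent(long id, String sku, int qty) implements Serializable {}

      // The actual business logic, identical in both cases.
      static int handle(OrderEvent e) { return e.qty() * 2; }

      public static void main(String[] args) throws Exception {
          OrderEvent e = new OrderEvent(42L, "ABC-1", 3);

          // In-process: a plain function call. No copies, no buffers, no transit.
          int direct = handle(e);

          // Cross-service: serialise on the producer side, deserialise on the
          // consumer side, and only then run the same function.
          ByteArrayOutputStream buf = new ByteArrayOutputStream();
          try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
              out.writeObject(e);
          }
          try (ObjectInputStream in = new ObjectInputStream(
                  new ByteArrayInputStream(buf.toByteArray()))) {
              int remote = handle((OrderEvent) in.readObject());
              System.out.println(direct == remote); // same result, very different cost
          }
      }
  }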

Now, I am not saying microservices are a useless pattern. But the pattern is so abused that it might as well be. I think most projects would be happier if people had simply never heard of microservices and instead spent some time figuring out how to build a correctly modularised monolithic application first, reaching for something more complex only once they actually need it.



Also, the single most nonsensical reason people give for doing microservices is that "it allows you to scale parts of the application separately". Why the fuck do you need to do that? Do you scale every API endpoint separately based on the load it gets? No, of course not. You scale until the hot parts have a manageable load, and the cold parts just tag along at no cost. The only time this argument makes sense is when one part is a stateless application and the other is a database or cache cluster.

Microservices make sense when there are very strong organizational boundaries between the parts (you'd have to reinterview to move from one team to the other), or if there are technical reasons why two parts of the code cannot share the same runtime environment (such as being written in different languages), and a few other less common reasons.


Oh, it is even worse.

The MAIN reason for microservices was that you could have multiple teams work on their services independently of each other. Coordinating the work of multiple teams on a single huge monolithic application is a very complex problem with a lot of overhead.

But in many companies the development of microservices by agile teams is actually synchronised across multiple teams. They typically have a common release schedule, want to deliver larger features across a multitude of services all at the same time, etc.

This effectively makes the task way more complex than it would be with a monolithic application.


I've worked with thousands of other employees on a single monolithic codebase, which was delivered continuously. There was no complex overhead.

The process went something like this:

1. write code

2. get code review from my team (and/or the team whose code I was touching)

3. address feedback

4. on sign-off, merge and release code to production

5. monitor logs/alerts for increase in errors

In reality, even with thousands of developers, you don't have thousands of merges per day. It was more like 30-50 PRs merged per day, and on a multi-million line codebase, most PRs were never anywhere near each other.


Regarding monoliths... when there's an issue, everyone who made a PR is subject to forensics to try to identify the cause. I'd rather make a separate app that is infrequently changed, resulting in fewer faults and shorter investigations. Being on the hook to figure out whether a break is "related" to my team's code is also a waste of developer time. There is a middle ground for optimising developer time, but putting everything in the same app is absurd, regardless of how much money it makes.


I'm not sure how you think microservices gets around that (it doesn't!).

We didn't play a blame game though... your team was responsible for your slice of the world and that was it. Anyone could open a PR to your code and you could open a PR to anyone else's code. It was a pretty rare event unless you were working deep in the stack (aka, merging framework upgrades from open source) or needing new APIs in someone else's stuff.


> I'm not sure how you think microservices gets around that (it doesn't!).

Microservices get around potential dependency bugs, because of the isolation. Now there's an API orchestration between the services. That can be a point of failure. This is why you want BDD testing for APIs, to provide a higher confidence.

The tradeoff isn't complicated. Slightly more work up front for less maintenance long term; granted, this approach doesn't scale forever. There's no science behind finding the tipping point.
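
For instance, a minimal contract-test sketch of the kind of API check meant here (JUnit 5 plus the JDK HTTP client; the endpoint and field names are made up for illustration):

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;
  import org.junit.jupiter.api.Test;
  import static org.junit.jupiter.api.Assertions.*;

  class PaymentContractTest {
      HttpClient http = HttpClient.newHttpClient();

      @Test
      void paymentEndpointHonoursItsContract() throws Exception {
          // Hypothetical endpoint; in practice this runs against a deployed instance.
          HttpRequest req = HttpRequest.newBuilder(
                  URI.create("http://localhost:8080/payments/42")).GET().build();
          HttpResponse<String> res = http.send(req, HttpResponse.BodyHandlers.ofString());

          // The consumer's expectations, pinned down as executable assertions.
          assertEquals(200, res.statusCode());
          assertTrue(res.body().contains("\"status\"")); // a field clients rely on
      }
  }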


> Microservices get around potential dependency bugs, because of the isolation.

How so? I'd buy that bridge if you could deliver, but you can't. Isolation doesn't protect you from dependency bugs and doesn't protect your dependents from your own bugs. If you start returning "payment successful" when it isn't; lots of people are going to get mad -- whether there is isolation or not.

> Now there's an API orchestration between the services

An API is simply an interface -- whether that is over a socket or in-memory, you don't need a microservice to provide an API.

> This is why you want BDD testing for APIs, to provide a higher confidence.

Testing is possible in all kinds of software architectures, but we didn't need testing just to make sure an API was followed. If you broke the API contract in the monolith, it simply didn't compile. No testing required.
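
A tiny sketch of what that compile-time contract looks like (names hypothetical):

  // In a monolith the "API" is just an interface the compiler enforces.
  interface PaymentService {
      Receipt charge(long accountId, long amountCents);
  }

  record Receipt(String id, boolean success) {}

  class Checkout {
      private final PaymentService payments;
      Checkout(PaymentService payments) { this.payments = payments; }

      Receipt placeOrder(long account, long cents) {
          return payments.charge(account, cents); // checked at compile time
      }
  }
  // Rename charge() or change its signature and Checkout stops compiling
  // immediately; no test suite is needed to notice the broken contract.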

> Slightly more work up front for less maintenance long term

I'm actually not sure which one you are pointing at here... I've worked with both pretty extensively in large projects and I would say the monolith was significantly LESS maintenance for a 20 year old project. The microservice architectures I've worked on have been a bit younger (5-10 years old) but require significantly more work just to keep the lights on, so maybe they hadn't hit that tipping point you refer to, yet.


50 PRs a day from a thousand developers is definitely not a healthy situation.

It means any developer merges their work very, very rarely (1000 developers / 50 merges per day = one merge per developer every 20 working days, i.e. roughly 4 weeks), and in my experience that means either low productivity (they just produce little) or huge PRs that have lots of conflicts and are a PITA to review.


Heh, PRs were actually quite small (from what I saw), and many teams worked on their own repos and then grafted them into the main repo (via subtrees and automated commits). My team worked in the main repo, mostly on framework-ish code. I also remember quite a bit of other automated commits as well (mostly built caches for things that needed to be served sub-ms but changed very infrequently).

And yes, spending two-to-three weeks on getting 200 lines of code absolutely soul-crushingly perfect, sounds about right for that place but that has nothing to do with it being a monolith.


> Also, the single most nonsensical reason people give for doing microservices is that "it allows you to scale parts of the application separately". Why the fuck do you need to do that? Do you scale every API endpoint separately based on the load it gets? No, of course not. You scale until the hot parts have a manageable load, and the cold parts just tag along at no cost. The only time this argument makes sense is when one part is a stateless application and the other is a database or cache cluster.

I think it really matters what sort of application you are building. I do exactly this with my search engine.

If it were a monolith, it would take about 10 minutes to cold start, and it would consume far too much RAM to run a hot standby. That makes deploying changes pretty rough.

So the index is split into partitions, each with about a one-minute start time. Thus, to upgrade the application without long outages, I upgrade one index partition at a time. With 9 partitions, that's a rolling 10%-ish service outage.

The rest of the system is another couple of services that can also restart independently; these have a memory footprint under 100MB and have hot standbys.

This wouldn't make much sense for a CRUD app, but in my case I'm loading a ~100GB state into RAM.
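
Roughly the shape of that rolling upgrade, as a hedged sketch (all names hypothetical):

  import java.util.List;

  public class RollingUpgrade {
      // Hypothetical handle on one index partition.
      interface Partition {
          void stop();
          void startNewVersion(); // ~1 minute cold start per partition
          boolean healthy();
      }

      // Restart one partition at a time: with 9 partitions, only ~10%
      // of the index is unavailable at any moment.
      static void upgrade(List<Partition> partitions) throws InterruptedException {
          for (Partition p : partitions) {
              p.stop();
              p.startNewVersion();
              while (!p.healthy()) {
                  Thread.sleep(1_000); // wait for the partition to warm up
              }
          }
      }
  }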


> Why the fuck do you need to do that?

Because deploying the whole monolith takes a long time. There are ways to mitigate this, but in $currentjob we have a LARGE part of the monolith implemented as a library, so whenever we make changes to it, we have to deploy the entire thing.

If it were a service (which we are moving to), it would be able to be deployed independently, and much, much quicker.

There are other solutions to the problem, but "µs are bad, herr derr" is just a trope at this point. Like anything, they're a tool, and can be used well or badly.


Yes. There are costs to having monoliths. There are also costs to having microservices.

My hypothesis is that in most projects, the problems with monoliths are smaller, better understood and easier to address than the problems with microservices.

There are truly valid cases for microservices. The reality, however, is that most projects are not large enough to benefit from them. They are only large projects because they made a bunch of stupid performance and efficiency mistakes and now need all this hardware to be able to provide their services.

As to your statement that deploying monoliths takes time... that's not really that big of a problem. See, most projects can be engineered to build and deploy quickly. It takes a truly large amount of code to make that a real challenge.

And you can still use devops tools and best practices to manage monolithic applications and deploy them quickly. The only things that grow large are the compilation process itself and the size of the binary being transferred.

But in my experience it is not out of the ordinary for a small microservice with just a couple of lines of code to produce an image that takes gigabytes of space and minutes to compile and deliver, so I think the argument is pretty moot.


Also - you give up type safety and refactoring. LoL


Well, technically, you can construct microservices while preserving type safety. You can have an interface with two implementations:

- on the service provider, the implementation provides the actual functionality,

- on the client, the implementation of the interface is just a stub connecting to the actual service provider.

Thus the separation into services becomes, in a sense, an implementation detail.

However in practice very few projects elect to do this.
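
A hedged sketch of that pattern (all names and the endpoint are hypothetical):

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  // The shared, typed contract.
  interface InventoryService {
      int stockLevel(String sku);
  }

  // On the service provider: the real implementation.
  class LocalInventoryService implements InventoryService {
      public int stockLevel(String sku) {
          return 7; // stand-in for the actual business logic / DB lookup
      }
  }

  // On the client: a stub that forwards the call over the network.
  class RemoteInventoryService implements InventoryService {
      private final HttpClient http = HttpClient.newHttpClient();
      private final URI base;

      RemoteInventoryService(URI base) { this.base = base; }

      public int stockLevel(String sku) {
          try {
              HttpRequest req = HttpRequest.newBuilder(base.resolve("/stock/" + sku))
                      .GET().build();
              return Integer.parseInt(
                      http.send(req, HttpResponse.BodyHandlers.ofString()).body());
          } catch (Exception e) {
              throw new IllegalStateException(e);
          }
      }
  }
  // Callers depend only on InventoryService, so which implementation they
  // get is purely a wiring detail.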


Even with this setup in place you need a heightened level of caution relative to a monolith. In a monolith I can refactor function signatures however I like, because the whole service is deployed as a single atomic unit. Once you have two independently deployed components, that goes out the window, and you need to be a lot more mindful when introducing breaking changes to an endpoint's types.


You don't have to. The producer of the microservice also publishes an adapter. The adapter looks like a regular local service, but it implements the calls as REST requests to the actual microservice. This way you get type safety. Generally you structure the code as:

  proj
  |- proj-api
  |- proj-client
  |- proj-service

Both proj-client and proj-service depend on proj-api, so they stay in sync about the contract.

Now you can switch the implementation of the service to gRPC if you want, with full source compatibility. Or move it in-process.
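
Mapped onto that layout, a minimal sketch (names hypothetical; the HTTP plumbing is elided to keep the shape visible):

  // proj-api: the shared contract; both other modules depend on it.
  interface GreetingService {
      String greet(String name);
  }

  // proj-service: the real implementation behind the endpoint.
  class GreetingServiceImpl implements GreetingService {
      public String greet(String name) { return "Hello, " + name; }
  }

  // proj-client: the adapter that looks like a local service but would
  // forward the call as a REST (or gRPC) request.
  class GreetingClient implements GreetingService {
      public String greet(String name) {
          // ... build the request, send it, parse the response ...
          throw new UnsupportedOperationException("transport elided in this sketch");
      }
  }
  // Because both sides compile against proj-api, a signature change breaks
  // the build rather than production, and swapping REST for gRPC (or an
  // in-process call) is just another implementation of the same interface.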



