
I like this ontology; we've divided our events into roughly similar categories but these are better names than we use.

I'd further advance that

- Trigger events and RESTful events are roughly the same thing; it's just a question of where you land in the latency vs. size vs. schema-flexibility tradeoff space. We even have events in our system whose schema inlines data below a certain size but links to it above that.

- There is a fourth type: windowed reductions over domain events, e.g. the "changelog" in Kafka Streams. These bear the same relation to domain events as RESTful events do to trigger events.
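The inline-below-a-threshold, link-above-it choice from the first point can be sketched roughly like this (the threshold, envelope fields, and injected store function are all assumptions for illustration, not anyone's actual schema):

```typescript
// Sketch: inline small payloads ("trigger-style"), link out to large ones
// ("RESTful-style"). Threshold and field names are illustrative only.
type EventEnvelope =
  | { type: string; data: unknown }      // payload attached
  | { type: string; dataUrl: string };   // payload linked

const INLINE_LIMIT = 64 * 1024; // characters; a real system would measure encoded bytes

function wrapEvent(
  type: string,
  payload: unknown,
  store: (body: string) => string, // persists the body, returns a URL (e.g. into S3)
): EventEnvelope {
  const body = JSON.stringify(payload);
  if (body.length <= INLINE_LIMIT) return { type, data: payload };
  return { type, dataUrl: store(body) };
}
```

Consumers then see one event type either way; only the transport of the payload differs.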




If I understand your fourth type correctly, it's like snapshotting in an event-sourced system. For me those are only internal and not meant to be communicated; rather, they exist to speed up reconstitution of an event-sourced aggregate. I might be misunderstanding your point, though.

With regard to your first point: in terms of communication, they serve very similar integration patterns. I imagine you've standardised the consumption of these events, making it transparent if the event is linked or attached. Is that correct? In such cases there's one difference: the out-of-sync state, i.e. the situation where the state has been altered after the signal was dispatched but before it was consumed. Is that something you deal with?


> I imagine you've standardised the consumption of these events, making it transparent if the event is linked or attached. Is that correct?

Yes, a pretty thin layer that returns the data if it was attached to the message or makes an HTTP request if not.
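That layer might look roughly like this (the envelope fields and the injected fetcher are assumptions; a real version would wrap `fetch` and handle HTTP errors):

```typescript
// Thin dereferencing layer: hand back attached data directly,
// or follow the link with an HTTP request otherwise.
type Envelope = { data?: unknown; dataUrl?: string };

async function resolvePayload(
  e: Envelope,
  fetchJson: (url: string) => Promise<unknown>, // e.g. url => fetch(url).then(r => r.json())
): Promise<unknown> {
  if (e.data !== undefined) return e.data;       // attached: no network call needed
  if (e.dataUrl !== undefined) return fetchJson(e.dataUrl);
  throw new Error("envelope carries neither data nor dataUrl");
}
```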

> the situation where the state has been altered after the signal was dispatched but before it was consumed. Is that something you deal with?

Our most common use case is trying to keep local caches of e.g. control plane data up to date, so retrieving something more recent than existed when the event was produced is usually a bonus. In the rare cases where it is a concern, we make sure the external data link is unique (e.g. into an S3 bucket with an expiration policy a bit longer than the event retention time).
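Concretely, the uniqueness and the retention/expiry relationship could look something like this (the key layout and the retention figures are invented for illustration):

```typescript
// One immutable object per event: the key embeds the event identity,
// so it is never overwritten and a late consumer reads exactly the
// state that existed when the event was produced.
function externalDataKey(eventId: string, producedAtMs: number): string {
  return `events/${producedAtMs}/${eventId}.json`;
}

// The bucket's lifecycle expiry must outlive the event retention,
// or a consumer could dereference a link to an already-deleted object.
const EVENT_RETENTION_DAYS = 7;
const OBJECT_EXPIRY_DAYS = EVENT_RETENTION_DAYS + 2; // safety margin
```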

> For me those are only internal and not meant to be communicated; rather, they exist to speed up reconstitution of an event-sourced aggregate. I might be misunderstanding your point, though.

I think you've got the point. As these are implemented in Kafka Streams, they are themselves an event source and can be consumed like any other topic's messages.
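For a rough picture of such a windowed reduction, here is plain TypeScript standing in for what a Kafka Streams `groupByKey().windowedBy(...).count()` would compute (the window size and event shape are invented):

```typescript
// Tumbling-window count per key. Each map update corresponds to a
// changelog record that downstream consumers could read like any
// other topic's messages.
interface DomainEvent { key: string; timestampMs: number }

function windowedCounts(events: DomainEvent[], windowMs: number): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    const windowStart = Math.floor(e.timestampMs / windowMs) * windowMs;
    const slot = `${e.key}@${windowStart}`;        // (key, window) pair
    counts.set(slot, (counts.get(slot) ?? 0) + 1); // reduce step
  }
  return counts;
}
```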


I once worked on a real-time dashboard for operational insights at an airport, where they initially used EventStore to do effectively the same thing. We ended up replacing EventStore with a similar setup using TypeScript and Kafka; instead of storing the projections in a new topic, we used MySQL and Redis as storage backends. It worked quite well, especially in cases where more interaction with external systems was needed. I haven't worked with Kafka Streams yet, but it sure sounds worth spending some time with.
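The projection side of a setup like that might look roughly as follows (the event shape and store interface are invented; a Map stands in for MySQL/Redis):

```typescript
// Fold domain events into a queryable read model, persisted in an
// external store rather than written back to a changelog topic.
interface FlightEvent { flightId: string; type: "delayed" | "departed" }

interface ProjectionStore {
  get(key: string): { delays: number; departed: boolean } | undefined;
  set(key: string, value: { delays: number; departed: boolean }): void;
}

function applyEvent(store: ProjectionStore, e: FlightEvent): void {
  const row = store.get(e.flightId) ?? { delays: 0, departed: false };
  if (e.type === "delayed") row.delays += 1;
  if (e.type === "departed") row.departed = true;
  store.set(e.flightId, row); // a real consumer would UPSERT into MySQL/Redis here
}
```

Keeping the projection in an external store made it easy for the dashboard and other systems to query it directly.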



