
It can increase latency (which can be somewhat mitigated by having a write buffer, e.g. on EBS volumes), but it substantially _reduces_ cost: all cross-AZ traffic (which is $$$) is handled by the object storage layer, where it doesn't get charged. This architecture has been tremendously popular recently, championed by WarpStream and also available from Confluent (Freight clusters), AutoMQ, Bufstream, etc. The KIP mentioned in the post aims to bring this back into the upstream open-source Kafka project.
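To make the cost argument concrete, here's a rough back-of-envelope sketch. The prices below are assumptions for illustration (roughly AWS list prices: cross-AZ transfer billed ~$0.01/GB on each side, S3 PUTs ~$0.005 per 1,000 requests), not figures from the post, and the batch size is a made-up parameter:

```python
def replication_cost_per_gb(replicas_in_other_azs=2, rate_per_gb_each_side=0.01):
    """Classic Kafka: each GB produced is replicated to brokers in other AZs.
    Cross-AZ transfer is billed on both the sending and receiving side."""
    return replicas_in_other_azs * 2 * rate_per_gb_each_side

def object_storage_cost_per_gb(batch_mb=8, put_price_per_1k=0.005):
    """Diskless Kafka: produced data is batched and PUT to object storage;
    the object store handles cross-AZ durability with no transfer fee."""
    puts_per_gb = 1024 / batch_mb
    return puts_per_gb * put_price_per_1k / 1000

print(f"cross-AZ replication: ${replication_cost_per_gb():.4f}/GB")
print(f"object-storage PUTs:  ${object_storage_cost_per_gb():.5f}/GB")
```

With these assumed numbers, replication works out to $0.04/GB versus well under a cent for PUT requests, which is why batching aggressively into larger objects (the latency trade-off mentioned above) is central to the design.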

So it's cheaper *on AWS*. On any cloud provider where cross-AZ traffic isn't $$$, I can't imagine this architecture being cheaper.

Engineering solutions which only exist because AWS pricing is whack are...well, certainly a choice.

I can also think of lots of cases where whatever you're running is fine to just run in a single AZ since it's not critical.


The other clouds have fees like this too.

Even if this were to change, using object storage also brings a lot of operational simplicity compared to managing a bunch of disks: you can easily and quickly scale to zero, or scale up to handle bursts in traffic.

An architecture like this also makes it possible to achieve a truly active-active multi-region Kafka cluster that has real SLAs.

See: https://buf.build/blog/bufstream-multi-region

(disclosure: I work at Buf)


I think Azure eliminated their inter-availability zone data transfer fees last year. https://azure.microsoft.com/en-us/updates?id=update-on-inter...


