
Depends heavily on the use case. I know multiple video-related companies that chunked replay data into 1s and 500ms segments.

Having many keys means that you can perform thousands of asynchronous requests for small pieces of data and then piece it together on the client side.

Super low latency is just something else to optimize for.
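A minimal sketch of that client-side reassembly, assuming the segments sit behind an HTTP endpoint; the base URL and key layout here are hypothetical, the point is only that many small requests are issued concurrently and the results are stitched back together in order:

```python
import asyncio
import aiohttp

# Hypothetical storage endpoint and key naming scheme.
BASE_URL = "https://storage.example.com/replays"

async def fetch_segment(session: aiohttp.ClientSession, key: str) -> bytes:
    """Fetch one ~500ms segment by its object key."""
    async with session.get(f"{BASE_URL}/{key}") as resp:
        resp.raise_for_status()
        return await resp.read()

async def fetch_replay(keys: list[str]) -> bytes:
    """Issue all segment requests concurrently, then join them in key order."""
    async with aiohttp.ClientSession() as session:
        chunks = await asyncio.gather(*(fetch_segment(session, k) for k in keys))
    return b"".join(chunks)

if __name__ == "__main__":
    # e.g. one key per 500ms segment of a replay (names are made up)
    keys = [f"match-42/segment-{i:06d}" for i in range(10)]
    data = asyncio.run(fetch_replay(keys))
```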



But then you need to push these segments into partitions, and big partitions are really bad, especially for old versions of Cassandra… although I've met customers with partitions 100 GB in size…


As another commenter said, a 500ms clip is super tiny. Each key is a separate partition.

So each segment is a separate partition.


Chunks will be evenly distributed across many partitions; there's no need to store all the chunks in one partition.
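A rough sketch of what that layout could look like with the Python cassandra-driver; the keyspace, table, and column names are assumptions, and the only real point is that the segment timestamp is part of the composite partition key, so segments hash to many small partitions instead of piling up in one:

```python
from cassandra.cluster import Cluster

# Hypothetical contact point for a local cluster.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS video
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# One row per ~500ms segment. Because segment_ts is inside the composite
# partition key (rather than a clustering column under stream_id), each
# segment lands in its own partition and no partition grows without bound.
session.execute("""
    CREATE TABLE IF NOT EXISTS video.replay_segments (
        stream_id  uuid,
        segment_ts timestamp,
        data       blob,
        PRIMARY KEY ((stream_id, segment_ts))
    )
""")

cluster.shutdown()
```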



