
Yeah, we have 300M+ rows in a table as well. It's partitioned by time and chugs along with no issues. Granted, it's a 30 vCPU, 100 GB RAM machine, but it hosts billions of rows in aggregate.
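
For reference, a minimal sketch of what partitioning by time can look like, assuming the database is PostgreSQL (10+ declarative range partitioning) and the psycopg2 driver; the table and column names here are made up:

    import psycopg2

    # Hypothetical "events" table, range-partitioned by timestamp so each
    # month's rows land in their own child table.
    ddl = """
    CREATE TABLE IF NOT EXISTS events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE IF NOT EXISTS events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE TABLE IF NOT EXISTS events_2024_02 PARTITION OF events
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
    """

    with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
        cur.execute(ddl)

Queries that filter on created_at only have to touch the matching partitions, which is a big part of why a table that size stays manageable.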


The last app I worked on had a few tables in the billions of rows. It seemed to work fine, as we were only really accessing them by unique keys, which seems to remain fast no matter how large the table gets.


> we were only really accessing them by unique keys, which seems to remain fast no matter how large the table gets.

Even a naive B-tree index has logarithmic lookup cost, which means the time to find a record grows very slowly as the number of records increases: doubling the table adds roughly one more comparison.
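
To put rough numbers on that, here's a back-of-the-envelope sketch in Python, not tied to any particular database; the fanout of ~200 keys per index page is an assumed figure:

    import math

    # Approximate lookup cost versus table size: a binary search over n keys
    # needs about log2(n) comparisons, and a B-tree with a fanout of roughly
    # 200 keys per page needs about log_200(n) page reads.
    for n in (1_000_000, 300_000_000, 10_000_000_000):
        print(f"{n:>14,} rows: ~{math.ceil(math.log2(n))} comparisons, "
              f"~{math.ceil(math.log(n, 200))} B-tree levels")

Going from a million rows to ten billion adds only about 14 comparisons, or one to two extra index levels, which is why unique-key lookups barely slow down as the table grows.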



