I get that - but the usual implementation would be to have a set of databases with indexes (maybe mysql or mongodb) where you store all the data and can run ad-hoc queries against it. You'd then put memcache in front of that for fast access to repeated queries where you already know which data you want. If the data isn't in memcache, the read falls through to the underlying DB, which is already on disk.
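That read-through pattern is simple enough to sketch. The snippet below is only a minimal illustration of cache-aside reads, using plain dicts as stand-ins for memcache and the indexed database; the names (cache, db, lookup) are illustrative, not any particular API.

    import time

    # Stand-ins for memcache and the indexed database (assumption: in a real
    # setup these would be a memcached client and mysql/mongodb).
    cache = {}          # key -> (value, expiry timestamp)
    db = {"user:42": {"name": "alice", "plays": 1234}}

    CACHE_TTL = 60  # seconds

    def lookup(key):
        # 1. Try the cache first.
        entry = cache.get(key)
        if entry is not None:
            value, expires = entry
            if time.time() < expires:
                return value        # cache hit
        # 2. Cache miss (or expired): fall through to the on-disk DB.
        value = db.get(key)
        # 3. Populate the cache for subsequent repeated queries.
        if value is not None:
            cache[key] = (value, time.time() + CACHE_TTL)
        return value

    print(lookup("user:42"))   # miss: served from db, then cached
    print(lookup("user:42"))   # hit: served from cache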

zBase would have its own full copy of the data already on distributed disks, so it wouldn't need to fall through to some other database. That seems to be the entire point - but surely you'd still need to store the data somewhere you could run ad-hoc queries on it? That means the data is duplicated across two places that would need to be kept in sync. If a write fails on one of the data stores but succeeds on the other, don't you have inconsistent data now?
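For concreteness, here is the dual-write failure mode being described; the two dicts stand in for the KV store and the queryable database, and the exception is just a simulated partial failure, not anything specific to zBase.

    # Two independent stores that are supposed to hold the same data
    # (stand-ins for the KV store and the ad-hoc-query database).
    kv_store = {}
    query_db = {}

    def dual_write(key, value, fail_second=False):
        kv_store[key] = value                 # first write succeeds
        if fail_second:
            raise RuntimeError("query_db write failed")
        query_db[key] = value                 # second write may never happen

    try:
        dual_write("user:42", {"plays": 1234}, fail_second=True)
    except RuntimeError:
        pass

    # The stores now disagree - exactly the inconsistency the comment asks about.
    print(kv_store.get("user:42"))   # {'plays': 1234}
    print(query_db.get("user:42"))   # None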




Currently zBase does not have any indexing capabilities. But the design inherently lets you use the incremental replication protocol to build indexing systems outside of zBase.
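The details of zBase's replication protocol aren't spelled out here, so the following is only a rough sketch of the idea: a consumer tails a stream of key/value mutations (represented as a plain iterable) and maintains a secondary index outside the store. The event tuples and the index_by_field helper are assumptions for illustration, not the actual protocol.

    from collections import defaultdict

    # Hypothetical mutation events as they might arrive over an incremental
    # replication stream: (operation, key, value). zBase's real wire format
    # is not shown here.
    replication_stream = [
        ("set", "user:1", {"name": "alice", "city": "SF"}),
        ("set", "user:2", {"name": "bob", "city": "NY"}),
        ("set", "user:1", {"name": "alice", "city": "NY"}),
        ("delete", "user:2", None),
    ]

    def index_by_field(stream, field):
        """Build a field-value -> set-of-keys index outside the KV store."""
        index = defaultdict(set)
        key_to_field = {}  # last indexed value per key, for cleanup
        for op, key, value in stream:
            # Remove any stale index entry for this key first.
            old = key_to_field.pop(key, None)
            if old is not None:
                index[old].discard(key)
            if op == "set" and value is not None and field in value:
                index[value[field]].add(key)
                key_to_field[key] = value[field]
        return index

    idx = index_by_field(replication_stream, "city")
    print(idx["NY"])   # {'user:1'} - user:2 was deleted, user:1 moved to NY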

zBase is used as a highly available key-value store for writes and reads. It also offers a few fancy operations, like get-lock.
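get-lock isn't documented in this thread, so the toy class below only sketches the usual semantics of such an operation (read a value and lock its key so other writers are rejected until it's released); the method names are illustrative, not zBase's actual API.

    import uuid

    class TinyLockingStore:
        """Toy in-memory store illustrating get-lock style semantics."""

        def __init__(self):
            self._data = {}
            self._locks = {}   # key -> lock token

        def get_lock(self, key):
            """Return (value, token); the key rejects other writes until unlocked."""
            if key in self._locks:
                raise RuntimeError("key is already locked")
            token = uuid.uuid4().hex
            self._locks[key] = token
            return self._data.get(key), token

        def set(self, key, value, token=None):
            held = self._locks.get(key)
            if held is not None and held != token:
                raise RuntimeError("key is locked by another client")
            self._data[key] = value
            if held is not None:
                del self._locks[key]   # writing with the token releases the lock

    store = TinyLockingStore()
    store.set("counter", 1)
    value, token = store.get_lock("counter")
    # store.set("counter", 99)               # would raise: locked by another client
    store.set("counter", value + 1, token)   # read-modify-write under the lock
    print(store._data["counter"])            # 2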


If your workload is light enough that a set of "full" databases is a cost-effective solution, then a distributed KV store is not what you need.



