> Each entry contains a bunch of IPFS address pointers - to the last entry on that "database", to any extra entries you found out about, and to a bunch of previous entries (just to speed up reading)
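For concreteness, the entry shape being described might look something like this (a sketch in TypeScript; the field names are my guesses, not OrbitDB's actual schema):

```typescript
// Sketch of a log entry as described above. Every pointer is an IPFS CID.
interface LogEntry {
  payload: Uint8Array; // the application data for this entry
  next: string[];      // CIDs of the previous head entries this one builds on
  refs: string[];      // CIDs of older entries, kept only to speed up reading
}
```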
How bad is the latency to read the entire database state? Even git can become pretty slow fetching and processing objects on large repos, and that's with all of the refs local after the initial fetch.
Seems like with the additional latency on IPFS, resolving the entire DB state (for snapshotting it or otherwise backing it up, for instance) would be unusably slow for anything even somewhat large, no?
From that description, incremental snapshots should just be a matter of reading the latest heads, and having a full copy of the DB means also having its complete history. The only gap is the first-time read of the whole DB. One way to close that gap might look something like git's pack files.
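A sketch of that pack-file idea, reusing the entry shape above; `fetchEntry` and `putBlock` are hypothetical helpers backed by IPFS, not real OrbitDB or IPFS APIs:

```typescript
// Walk the log from the heads, collect every reachable entry, and store
// the lot as one block, so a cold reader does one large fetch instead of
// one round-trip per entry. fetchEntry and putBlock are assumed helpers.
type LogEntry = { payload: Uint8Array; next: string[]; refs: string[] };
declare function fetchEntry(cid: string): Promise<LogEntry>;
declare function putBlock(bytes: Uint8Array): Promise<string>;

async function packLog(headCids: string[]): Promise<string> {
  const seen = new Set<string>();
  const entries: LogEntry[] = [];
  const queue = [...headCids];
  while (queue.length > 0) {
    const cid = queue.pop()!;
    if (seen.has(cid)) continue;
    seen.add(cid);
    const entry = await fetchEntry(cid);
    entries.push(entry);
    queue.push(...entry.next, ...entry.refs);
  }
  // One block holding the whole history; its CID is the "pack". A real
  // implementation would use a proper codec (e.g. CBOR) rather than JSON.
  return putBlock(new TextEncoder().encode(JSON.stringify(entries)));
}
```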
In my testing, I saw OrbitDB load 10-40 entries per second (with 1-byte entry payloads). Each entry tries to point to up to 64 previous entries, which allows a lot of parallelism in loading earlier entries.
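That parallel traversal might look roughly like this (same assumed `fetchEntry` helper as above; a sketch, not OrbitDB's actual loading code):

```typescript
// Breadth-first loading: each round fetches the whole frontier
// concurrently, and the up-to-64 refs per entry fan the frontier out fast.
type LogEntry = { payload: Uint8Array; next: string[]; refs: string[] };
declare function fetchEntry(cid: string): Promise<LogEntry>;

async function loadAll(headCids: string[]): Promise<Map<string, LogEntry>> {
  const loaded = new Map<string, LogEntry>();
  let frontier = [...new Set(headCids)];
  while (frontier.length > 0) {
    const entries = await Promise.all(frontier.map(fetchEntry));
    frontier.forEach((cid, i) => loaded.set(cid, entries[i]));
    const next = new Set<string>();
    for (const e of entries) {
      for (const cid of [...e.next, ...e.refs]) {
        if (!loaded.has(cid)) next.add(cid);
      }
    }
    frontier = [...next];
  }
  return loaded;
}
```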
Data size is also an issue, which isn't helped by parallelism.
And your database only ever grows, both in size and in number of entries.