Rama codifies and integrates the concepts I described in my book, with the high-level model being: indexes = function(data) and query = function(indexes). These correspond to "depots" (data), "ETLs" (functions), "PStates" (indexes), and "queries" (functions).
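To make that mapping concrete, here's a toy sketch in plain Java (this is just an illustration of the model, not Rama's actual API): a list of events stands in for a depot, a fold over those events stands in for the ETL that produces a PState, and a lookup stands in for a query.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of indexes = function(data) and query = function(indexes).
// Plain Java standing in for the concepts -- not Rama's actual API.
public class IndexModelSketch {
  // "Depot": an append-only log of raw events (here, page-view user IDs).
  static final List<String> depot = List.of("alice", "bob", "alice", "carol", "alice");

  // "ETL": a function from the depot's data to a "PState" (an index),
  // here view counts per user.
  static Map<String, Long> materialize(List<String> events) {
    Map<String, Long> pstate = new HashMap<>();
    for (String user : events) {
      pstate.merge(user, 1L, Long::sum);
    }
    return pstate;
  }

  // "Query": a function over the PState, not over the raw data.
  static long viewCount(Map<String, Long> pstate, String user) {
    return pstate.getOrDefault(user, 0L);
  }

  public static void main(String[] args) {
    Map<String, Long> pstate = materialize(depot);  // indexes = function(data)
    System.out.println(viewCount(pstate, "alice")); // query = function(indexes) -> 3
  }
}
```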
Rama is not batch-based. That is, PStates are not materialized by recomputing from scratch. They're incrementally updated with either stream or microbatch processing. But PStates can be recomputed from the source data on depots if needed.
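Continuing the same toy sketch (again, not Rama's API): the non-batch point is that the PState can be maintained by folding in each new depot record as it arrives, while a full recompute from the depot stays possible because the raw data is still there.

```java
import java.util.HashMap;
import java.util.Map;

// Toy contrast between incremental maintenance and full recomputation.
// Plain Java standing in for the concepts -- not Rama's actual API.
public class IncrementalSketch {
  // Incremental path: fold one new depot record into the existing PState.
  static void applyEvent(Map<String, Long> pstate, String user) {
    pstate.merge(user, 1L, Long::sum);
  }

  public static void main(String[] args) {
    Map<String, Long> pstate = new HashMap<>();
    // Stream-style: each record updates the index as it arrives.
    for (String user : new String[]{"alice", "bob", "alice"}) {
      applyEvent(pstate, user);
    }
    System.out.println(pstate); // {alice=2, bob=1}
    // A from-scratch fold over the whole depot would produce the same PState,
    // which is what makes rebuilding from the source data possible when needed.
  }
}
```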
Forgive me if I’m misunderstanding things, but this seems quite similar to what Materialize and ReadySet do, but like “as a library”, because Rama doesn’t use a “separate” layer for the storage stuff. Is that correct-ish?