As long as we never add new features, never need to change how we map UI <-> Postgres DDL, and our users never make any mistakes when they change their tables, it could work without being a complexity nightmare
Curious - so how do you manage client-specific schemas then? Do you just have mappings in Postgres (column1, column2, column3, etc.) or maybe store a client-specific schema in BSON per client?
It's all JSON in two Postgres tables: `collection`, which represents a Notion Database and has a `collection.schema` JSONB column, and `block`, which has a `block.properties` JSONB column storing the property values (the stuff in the Notion Database cells) for each row. We apply "schema on read" when querying or rendering a Notion Database, and a backend service builds indexes/caches for hot collections on the fly. That service handles all the queries for collections larger than X rows; for smaller collections, we just give the client the whole thing, modulo permissions, and it does the query locally.
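For illustration, here's a minimal sketch of what that shape could look like; only the two JSONB columns come from the description above, everything else (names, keys, the `status` property) is assumed, not Notion's actual DDL:

```sql
-- Hypothetical sketch of the two tables described above.
CREATE TABLE collection (
    id     uuid PRIMARY KEY,
    schema jsonb NOT NULL  -- property definitions, e.g. {"prop_id": {"name": "Status", "type": "select"}}
);

CREATE TABLE block (
    id            uuid PRIMARY KEY,
    collection_id uuid REFERENCES collection (id),
    properties    jsonb NOT NULL  -- per-row cell values, keyed by property id
);

-- "Schema on read": the stored rows are just JSON; a filter like
-- "status = Done" is interpreted against the collection's schema at query time.
SELECT b.id, b.properties
FROM block b
JOIN collection c ON c.id = b.collection_id
WHERE c.id = $1
  AND b.properties ->> 'status' = 'Done';
```

For small collections the same filtering would just happen client-side on the full, permission-filtered payload instead of in a query like this.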