
I'd recommend checking out the project link in the comment to which you replied. It is designed _specifically_ to avoid the problem you mention: instead of a fully materialized, fully-nested result, it returns flattened row-oriented results (like a SQL database).

This allows for lazy evaluation, i.e. rows are produced only as they are consumed. So if you accidentally write a query that would produce a billion rows but only consume 20, the query only executes for those 20 rows, plus whatever batching or prefetching the adapter that binds the dataset to the query engine does. A rough sketch of the idea is below.
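To make that concrete, here is a minimal, hypothetical sketch (not the project's actual API) of lazy, row-oriented evaluation using Python generators; all names (`scan_users`, `where`) are made up for illustration:

  from itertools import islice
  from typing import Any, Dict, Iterator

  def scan_users() -> Iterator[Dict[str, Any]]:
      """Hypothetical adapter: yields one flat row at a time from the dataset."""
      for i in range(1_000_000_000):  # a query that *could* produce a billion rows
          yield {"user_id": i, "name": f"user{i}"}

  def where(rows: Iterator[Dict[str, Any]], pred) -> Iterator[Dict[str, Any]]:
      """Lazy filter operator: does no work until a downstream consumer pulls."""
      return (row for row in rows if pred(row))

  # Build the query plan -- nothing executes yet.
  query = where(scan_users(), lambda r: r["user_id"] % 2 == 0)

  # Consume only 20 rows: only a few dozen source rows are ever materialized,
  # never the full billion-row result.
  first_20 = list(islice(query, 20))

The point is just that each operator pulls rows on demand from the one below it, so the cost tracks what you actually consume, not the size of the full result.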


