I think the most interesting part of this project is the fault tolerance. I can’t say I’ve seen any other projects do this, but it seems reasonable to want checkpointing during a long computation.
Another thing I like is that conceptually it seems like it would be simple to switch the underlying query engine (right now it’s Polars) in the future. Seems like a pretty general distributed system.
Checkpointing for fault tolerance is actually a bit difficult for long-running batch computations: state tends to grow linearly as the computation progresses, unlike streaming applications, where you typically use windows to keep state bounded. This means periodic checkpointing leads to a quadratic amount of writes to durable storage, which is pretty bad.
Quokka supports checkpointing but does not enable it by default, to prevent this common problem from killing normal operating performance.
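To make the quadratic-writes point concrete, here is a toy calculation (my own illustration, not Quokka code): if state grows by a fixed amount per step and each checkpoint snapshots the full state, the total bytes written grow with the square of the number of steps.

```python
# Toy model of checkpointing a batch job whose state grows linearly.
# Each step adds `growth` bytes of state; a full checkpoint writes the
# entire current state to durable storage.

def total_checkpoint_bytes(steps: int, growth: int) -> int:
    state = 0
    written = 0
    for _ in range(steps):
        state += growth      # state grows linearly with progress
        written += state     # full snapshot: write all current state
    return written

# Doubling the number of steps roughly quadruples the bytes written,
# the signature of quadratic growth.
small = total_checkpoint_bytes(100, 1)   # sum 1..100  = 5050
large = total_checkpoint_bytes(200, 1)   # sum 1..200  = 20100
```

Streaming systems avoid this because windowing keeps the checkpointed state roughly constant per checkpoint, so total writes stay linear.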
Fault tolerance is an important feature but, given Python’s popularity in data science, the most interesting part of this project to me is support for Python UDFs.
In principle, the programming language should not be the greatest consideration because developers can learn and use different languages for different applications. In practice, being able to draw on familiar syntax and libraries can make a real difference in usability.
Even Tesla's self-driving stack uses Python. If your Python is slow, you are using Python wrong. So it's kind of a good thing that it's natively slow, because you are not supposed to do your heavy computation in native Python. If it were faster (with lots of type hints and a build process added), you would probably not be nudged in the right direction.
I don't have very much background in ML or distributed systems, so forgive my naive questions...
> After all, most ML in industry today seems to be lightweight models applied to heavily engineered features
I assume "lightweight models" are those that don't have too many parameters, and "heavily engineered features" mean that the data fed into the model has undergone significant pre-processing via potentially complicated UDFs -- hence the motivation for the project. Is that right?
> Quokka is an open-source push-based vectorized query engine ... it is meant to be much more performant than blocking-shuffle based alternatives like SparkSQL
Does anyone have pointers to what push-based vs blocking-shuffle engines are? Any good papers?
> It should work on local machine no problem (and should be a lot faster than Pandas!)
So I understand why Quokka is faster than Spark, but I'm a bit uncertain as to why the author is also making a comparison with Pandas on a single machine. Is it because the streaming pipeline design means that Quokka can better take advantage of multiple cores?
That's right. My background is mostly in quantitative finance, where we would use models like linear regression on expert-engineered features based on market data, instead of throwing a deep neural network at raw price data like what some people might imagine.
On a single machine, you really should just use Polars. Quokka is faster than Pandas because it can take advantage of multiple cores, but so can Polars, and it is likely to be faster.
Sorry if I missed it -- Are there plans to offer a way to query this in actual sql? I believe SingleStore is MySQL compatible for example which I think is a nice feature. Basically I want to be able to interact with this much like I'd interact with another database I'm using or perhaps with a sqlalchemy core integration (which both SingleStore and Snowflake have).
Firstly, quokka is pure Python, which I believe is good for interoperability with Python-based UDFs.
Secondly, quokka tries to be fault tolerant, i.e. it can handle worker failures intra-query without having to start over. This is quite important in real-world Spark deployments with thousands of nodes running for many hours. AFAIK this is not well supported by most Spark alternatives.
Finally quokka has a much stronger focus on time series data analytics. It is meant to excel at workloads like range joins and asof/PIT joins used for feature engineering. (This part isn't too stable yet so is not open source)
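For readers unfamiliar with asof (point-in-time) joins: each row on the left is matched with the most recent right-hand row at or before its timestamp. A minimal standard-library sketch of the idea (my own illustration, not Quokka's implementation):

```python
import bisect

def asof_join(trades, quotes):
    """Match each trade with the latest quote at or before its timestamp.

    trades: list of (ts, trade_data), sorted by ts
    quotes: list of (ts, quote_data), sorted by ts
    """
    quote_times = [ts for ts, _ in quotes]
    out = []
    for ts, trade in trades:
        # Index of the rightmost quote with quote_ts <= ts.
        i = bisect.bisect_right(quote_times, ts) - 1
        out.append((ts, trade, quotes[i][1] if i >= 0 else None))
    return out

trades = [(5, "buy"), (12, "sell")]
quotes = [(1, 100.0), (4, 101.0), (10, 99.5)]
# The trade at t=5 sees the quote from t=4; the trade at t=12 sees t=10.
result = asof_join(trades, quotes)
```

A real engine would do this with sorted, partitioned columnar data rather than Python lists, but the matching rule is the same.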
This means quokka optimizes for a different point in the UDF/performance/fault-tolerance tradeoff space than something like Arrow Ballista or StarRocks, which I think are pure performance plays.
I really do think a distributed db with compute/storage separation and optimized for feature engineering/dataloading (for training NNs) is underserved.
I'd be very interested in the time series aspects of what you're building.
I have a SQL engine in Python too (https://github.com/mabel-dev/opteryx). I focused my initial effort on supporting SQL statements and making the usage feel like a database. That probably reflects the problem I had in front of me when I set out: handling only handfuls of gigabytes in a batch environment for ETLs, with a group of new-to-data-engineering engineers. I have recently started looking more at real-time performance, such as distributing work. I am interested in how you've approached it.
It uses https://github.com/sqlparser-rs/sqlparser-rs as the parser and lexer. The binder, planner, optimizer and executor are in Python. The optimizer stage only works on the logical plan and the rules are heuristic only.
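To give a flavor of what a heuristic rule on a logical plan looks like, here is a toy sketch of pushing a filter below a projection (my own illustration, not Opteryx's actual code or plan representation):

```python
# Toy logical plan nodes represented as (op, args, child) tuples.
# Heuristic rule: a Filter above a Project can be pushed below it when
# the filter only references columns the projection keeps, so the filter
# runs earlier and downstream operators see fewer rows.

def push_filter_below_project(plan):
    op, args, child = plan
    if op == "Filter" and child is not None and child[0] == "Project":
        pred_cols, predicate = args
        proj_cols = child[1]
        if set(pred_cols) <= set(proj_cols):
            # Swap the two nodes: Project(Filter(grandchild))
            return ("Project", proj_cols, ("Filter", args, child[2]))
    return plan

plan = ("Filter", (["a"], "a > 1"),
        ("Project", ["a", "b"],
         ("Scan", "t", None)))
optimized = push_filter_below_project(plan)
```

Real optimizers apply a set of such rules repeatedly until the plan stops changing, but each individual rule can be this small.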
I have. I am about 2x faster than trino with fault tolerance. But I didn't put the numbers on that plot because this trino feature is still really new and I might not be benchmarking it in the best way.
One significant disadvantage of PySpark is its reliance on py4j to serialize and deserialize objects between Java and Python when using Python UDFs. That constant serialization overhead becomes burdensome as the volume of data exchanged increases. However, I am glad to see efforts to create a data pipeline framework using Python and Ray.
~One suggestion, a Scala/Java Spark run of those benchmarks should be a valid baseline to compare against as well instead of PySpark.~
Ah it's SparkSQL so the execution probably wouldn't have much of py4j involvement, except for the collect.
There are also pandas UDFs, which use Arrow as the exchange format. I assume it still has to copy the data (?), but it makes the (de)serialization fast and allows for vectorized operations.
Most likely an autodidactic exercise. I'd love to see the source, even if, or especially if, it's half-finished. Seeing someone work their way through the snarls in such a project would be useful.
That's like saying "a library to parse and optimize computer programs", except probably even harder, since a compiler and runtime library can't make any assumptions about the programs they need to run, so they're limited in how much of that context information they can exploit.
Countless person-years have been spent on this, and it's still a very active field of research and engineering.
> 2x SparkSQL performance
Ah, ok, so it can be slow. Never mind then, carry on :-P
I'm the author of SQLGlot. Optimizing SQL is quite common and means things like projection / predicate push downs and logical simplification. SQL can be optimized relatively easily compared to a programming language because it is declarative. For example
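For instance, logical simplification folds away constant predicates, e.g. rewriting `WHERE x = 1 AND TRUE` to `WHERE x = 1`, which is safe precisely because SQL's declarative semantics don't depend on how the predicate is written. A toy sketch of the idea on a boolean expression tree (my own illustration, not SQLGlot's actual API):

```python
# Toy constant folding over AND/OR trees, the kind of logical
# simplification a SQL optimizer performs on WHERE clauses.

TRUE, FALSE = ("const", True), ("const", False)

def simplify(expr):
    if expr[0] == "and":
        parts = [simplify(p) for p in expr[1:]]
        if FALSE in parts:
            return FALSE                         # x AND FALSE -> FALSE
        parts = [p for p in parts if p != TRUE]  # drop x AND TRUE
        if not parts:
            return TRUE
        return parts[0] if len(parts) == 1 else ("and", *parts)
    if expr[0] == "or":
        parts = [simplify(p) for p in expr[1:]]
        if TRUE in parts:
            return TRUE                          # x OR TRUE -> TRUE
        parts = [p for p in parts if p != FALSE]
        if not parts:
            return FALSE
        return parts[0] if len(parts) == 1 else ("or", *parts)
    return expr                                  # leaf: comparison etc.

# WHERE x = 1 AND TRUE  ->  WHERE x = 1
pred = ("and", ("eq", "x", 1), TRUE)
simplified = simplify(pred)
```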
Oh, you mean optimizing the SQL as SQL, not optimizing the query execution?
Well, some of these transformations are useful, like the one you presented. But any non-trivial transformation may be either beneficial or detrimental, depending on a myriad of factors, including memory layout, compression, distribution of data, computing hardware, etc.
Does this only optimize things like `select * from table where false`, or can it optimize access paths as well? Such as whether an index can be used, and over what ranges, or whether a sequential scan should be used instead?
> Very fast kernels for SQL primitives like joins, filtering and aggregations. Quokka uses Polars to implement these. (I sponsor Polars on Github and you should too.) I am also exploring DuckDB, but I have found Polars to be faster so far.
PyArrow is also a CPython extension wrapping a C++ library.
I agree that it’s a little disingenuous to call it “pure Python” when the two libraries doing the heavy lifting are non-Python; but it’s not a lie that the entirety of the Quokka-specific codebase is Python.
Personally, what I would be more interested in (and what I thought this would be from the title) is a full SQL engine wholesale coded in Python, a la SQLite. Even if it wasn’t super performant or functional.
> Personally, what I would be more interested in (and what I thought this would be from the title) is a full SQL engine wholesale coded in Python, a la SQLite. Even if it wasn’t super performant or functional.
Am I incorrect? Elsewhere in the thread you say this project is pure Python and then in the post you say you use a Rust library for queries. Am I misunderstanding something?
Technically it’s worded that “[they] only wrote python”, which is true, in a literal sense.
I agree that saying “only” python gives an inaccurate implication, but it seems non-malicious and strictly correct. I don’t think they are trying to be misleading.
When people say "pure X", to me, it normally means they didn't involve an FFI or external compiler. This is an often beneficial thing (though also often at the expense of performance and maybe correctness) since it simplifies your build process.
For example, here [0] is a "pure Python postgres driver" and the implication is that it doesn't use libpg.
Or see also this discussion [1].
I don't mean that the author should or shouldn't do whatever they want. I just wouldn't use the term "pure" here or even say this is an engine written in Python since a large chunk of it is not written in Python. That doesn't mean this is not a cool project. It's a cool project!
"Pure Python" ultimately means that it all runs within the python interpreter, with no dependencies on code that runs outside of the interpreter (standard library notwithstanding).
I wrote a toy distributed SQL, Cypher graph, DynamoDB-style, and document-storage Python database, but it's more for experimentation than serious use. It's not ready for production; it's more a demonstration of how little code you can use to write a database.
Always wanted something like the Crafting Interpreters style of book but implementing a database from scratch, with bonus points for a distributed system with a leader/replica model, leader election, and things like partitioning, resizing partitions, and handling node failure.
Something like building a toy dynamodb variant. Would pay good money for this.
It is one thing to read about and guesstimate implementation choices from a white paper. It is totally another if a databases/distributed-systems expert walks you through propagating write-ahead logs to replicas. Hehe
And what it means to have eventually consistent writes vs strong writes in practice. A way to teach by doing.
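To give a flavor of what such a walkthrough might cover, here is a bare-bones sketch of a leader shipping write-ahead log entries to replicas (my own toy, not taken from any real system): a strongly consistent write waits for every replica to apply the entry before acknowledging, while an eventually consistent write acknowledges immediately and lets replicas catch up later.

```python
# Bare-bones write-ahead log replication: the leader appends each write
# to its log and ships entries to replicas, which replay them in order.

class Node:
    def __init__(self):
        self.log = []    # ordered write-ahead log entries
        self.data = {}   # materialized key-value state

    def apply(self, entry):
        self.log.append(entry)
        key, value = entry
        self.data[key] = value

class Leader(Node):
    def __init__(self, replicas):
        super().__init__()
        self.replicas = replicas
        self.pending = []   # entries not yet shipped to replicas

    def write_strong(self, key, value):
        """Strongly consistent: replicate before acknowledging."""
        entry = (key, value)
        self.apply(entry)
        for r in self.replicas:   # wait for every replica to apply
            r.apply(entry)
        return "ack"

    def write_eventual(self, key, value):
        """Eventually consistent: ack first, replicate later."""
        entry = (key, value)
        self.apply(entry)
        self.pending.append(entry)
        return "ack"              # replicas catch up on flush()

    def flush(self):
        """Asynchronous catch-up: ship queued entries to replicas."""
        for entry in self.pending:
            for r in self.replicas:
                r.apply(entry)
        self.pending = []

replicas = [Node(), Node()]
leader = Leader(replicas)
leader.write_strong("a", 1)    # all replicas see it immediately
leader.write_eventual("b", 2)  # replicas lag until flush()
```

In a real system the "ship to replicas" step is a network call that can fail or reorder, which is where most of the interesting complexity lives.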
I prefer reading a description of an algorithm than the code of the algorithm. From the description I can work out how to implement the code myself. I might look at someone's code for ideas or compare how I solved a problem.
I too would enjoy a whitepaper on database design. I document all the whitepapers I've read on my GitHub profile (see my HN profile).
Farley Knight added an AVL tree to the repository; I still need to change the unbalanced tree to use it.
I would like to know how the SQLite VM works so I can write a database VM. I learned some things from the Crafting Interpreters book and started writing my own programming language here
I haven't looked into this in detail, and it seems like a fine project at a glance, but this caught my attention from the introduction:
> When I set out, I had several objectives:
> Easy to install and run, especially for distributed deployments.
> [...]
> The first two objectives strongly scream Python as the language of choice for Quokka.
Python is probably one of the last languages I'd consider if ease of deployment is a priority. Packaging has historically been a mess, and deploying standalone binaries across platforms is a pain. State-of-the-art solutions are third party and involve bundling the interpreter for each platform. It's been a few years since I last used it for anything serious, but I believe this is still the case.
Whereas something like Go actually makes this infinitely easier, for both the developer and the user. One native Go command builds a standalone binary for each platform. It couldn't be simpler.
The other objective of supporting Python UDFs necessarily ties you to Python. And since this is solving a data science problem, it makes sense for it to be written in Python.
Yeah, it's just that the UDFs I see people need require Python packages like sklearn and PyTorch that really don't have good alternatives in other ecosystems.
> Having lost all the money I made from my startup on shitcoins and the stock market, I returned to my PhD program to build a better distributed query engine, Quokka