I tried developing a syntax like this: a table is set as context, then you apply transformations in pipeline fashion. That seemed promising, but when I tried rewriting some of my real-life examples, I found that the pipelines are much shorter than I expected. The reason is that you typically take a couple of tables, derive 3-4 tables from them (by filtering and grouping), then mix those together. That shape doesn't fit well into a pipeline structure, even with heavy nesting. In the end, the biggest gains on real examples (Python & Pandas) came from places where I made a function-like operation, which can just as well be written in Python.
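To make the shape of the problem concrete, here is a minimal Pandas sketch (the table and column names are made up for illustration): several derived tables branch off one source and then get merged back, which a single linear pipeline can't express without nesting.

```python
import pandas as pd

# Hypothetical sales table, just to illustrate the shape of the problem.
sales = pd.DataFrame({
    "region": ["N", "N", "S", "S"],
    "product": ["a", "b", "a", "b"],
    "amount": [10, 20, 30, 40],
})

# Several derived tables branch off the same source...
by_region = sales.groupby("region", as_index=False)["amount"].sum()
big_sales = sales[sales["amount"] > 15]

# ...and then get mixed back together, which a single linear
# pipeline cannot express without heavy nesting.
result = big_sales.merge(by_region, on="region",
                         suffixes=("", "_region_total"))
```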
I can share some sketches if you wish (maybe it can give some ideas).
But generally, I think the code could be structured around a calculation graph rather than a linear sequence of operations.
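A rough sketch of what a calculation graph might look like in code, assuming hypothetical node names and a toy evaluator: each node is a named table expression with explicit dependencies, evaluated in topological order by memoised recursion.

```python
import pandas as pd

# Toy calculation graph: name -> (function, list of dependency names).
# All names and tables here are made up for illustration.
graph = {
    "sales": (lambda: pd.DataFrame(
        {"region": ["N", "S"], "amount": [10, 40]}), []),
    "totals": (lambda sales: sales.groupby(
        "region", as_index=False)["amount"].sum(), ["sales"]),
    "big": (lambda sales: sales[sales["amount"] > 15], ["sales"]),
    "report": (lambda big, totals: big.merge(
        totals, on="region", suffixes=("", "_total")), ["big", "totals"]),
}

def evaluate(graph, name, cache=None):
    # Resolve dependencies recursively, memoising each node so shared
    # upstream tables are computed only once.
    cache = {} if cache is None else cache
    if name not in cache:
        fn, deps = graph[name]
        cache[name] = fn(*(evaluate(graph, d, cache) for d in deps))
    return cache[name]

report = evaluate(graph, "report")
```

The point of the sketch is that "big" and "totals" both reuse "sales" without any nesting, which is exactly the branching shape that linear pipelines struggle with.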
I've also tried to rewrite some of my challenging queries [0] in my hypothetical syntax, and while I think your observation about pipeline length is correct, the result still came out much better than SQL. Frankly, even in F#, most of my pipelines are around 5 functions too. In my view, pipelines are just a convenient mental model. I'd love to see your sketches; here are my (very WIP) concepts: [1]
I don't quite see how a computation graph would work as the core of a textual language and I'm sceptical about using it with existing DBMSs (I'd like to actually use my creation one day :) ), but I'm open to ideas.