The presented idea shortens the given examples, but it is not composable. What happens if you have a 1:N instead of a 1:1 relation? Or even an N:M relation? Where do you specify whether you want an inner / outer / left / right join? This proposal works for some simple queries but fails to capture the generality of the relational model.
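
To make the join question concrete (with made-up table and column names), this is the kind of choice plain SQL forces you to state explicitly and a dotted path leaves implicit:

  -- inner join: drops orders whose customer_id has no match
  SELECT orders.id, customers.name
  FROM orders
  INNER JOIN customers ON customers.id = orders.customer_id;

  -- left join: keeps every order, with NULL for a missing customer
  SELECT orders.id, customers.name
  FROM orders
  LEFT JOIN customers ON customers.id = orders.customer_id;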

So, from a language development point of view, one has to ask: is this special case worth the extra syntactic sugar? It has the downside that when a query evolves and falls out of the special case, you have to expand the sugar back into an explicit query yourself. This creates friction, and it is still necessary to understand the relational model.

Other commenters mentioned Neo4j as an example where similar ideas have been implemented. From my limited experience with Neo4j, I'd say it makes a lot of sense there, because graph queries often fall into the subclass of queries that can benefit from this syntactic sugar.

All in all, I would not call this a simplification. Syntactic sugar is never a simplification; it is an "easification". It makes certain examples easy and hides what is going on, without really abstracting it away.



They explain 1:N relations in the article, so I won't repeat that here. An N:M relation is represented the same way as in any other relational database: it requires a separate table whose rows contain both keys.
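
For reference, the usual shape of that junction table, with made-up names:

  CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE course  (id INTEGER PRIMARY KEY, title TEXT);

  -- the N:M relation gets its own table, one row per (student, course) pair
  CREATE TABLE enrollment (
    student_id INTEGER NOT NULL REFERENCES student(id),
    course_id  INTEGER NOT NULL REFERENCES course(id),
    PRIMARY KEY (student_id, course_id)
  );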

And if I'm honest, I think this captures more of the relational model than SQL does; you might be confusing the two. This syntactic sugar actually uses the properties that relations have, whereas SQL lets you compute all kinds of things whether they make sense or not.

Note that they talk about using foreign keys for this purpose. Really what this is doing is turning the relation formed by this column and the primary key into a function (see [1] for an article I wrote on the subject), which allows for some nicer syntax because functions are nicer than general relations. This means most of the problems you mention can be resolved through simply enforcing that constraint. In a sense this moves the problem, but it does mean you can't accidentally invalidate the query. And frankly having some syntactic sugar for foreign keys in SQL is a feature that's long overdue.
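
Roughly what I mean, with made-up table names (the sugared line below is only meant to suggest the idea, not the article's exact syntax): a NOT NULL foreign key turns the column into a total function into the referenced table, so a dotted path can be expanded mechanically into an equijoin.

  CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id)
  );

  -- sugar along the lines of  SELECT customer_id.name FROM orders
  -- could desugar to the plain equijoin:
  SELECT customers.name
  FROM orders
  JOIN customers ON customers.id = orders.customer_id;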

The main downside is the inability to do anything other than equijoins, and the inability to specify new relations on the fly. The latter is a bit of a problem, but not insurmountable. I can't figure out a reasonable way to do anything other than equijoins, but that might be for the best.

Also, ironically, what I'm really missing is how to do an actual join. It's nice to explicitly specify functions, but if you've got two foreign keys into the same table (functions with the same codomain), how do you compute the join (pullback)?
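
In plain SQL you would just write that join out yourself; a sketch with made-up tables, where two foreign keys share the same codomain:

  CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE email  (id INTEGER PRIMARY KEY, person_id INTEGER REFERENCES person(id), address TEXT);
  CREATE TABLE phone  (id INTEGER PRIMARY KEY, person_id INTEGER REFERENCES person(id), number TEXT);

  -- the pullback: (email, phone) pairs that agree on the person they point to
  SELECT email.address, phone.number
  FROM email
  JOIN phone ON phone.person_id = email.person_id;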

[1]: https://pragmathics.nl/2023/10/24/putting-the-relational-bac...


> All in all, I would not call this a simplification.

It is a simplification, in that it gives you less to understand and decode. Working with a query that has ten to twenty joins and join conditions, you have to juggle a lot of intermixed concerns, read a lot of query text and table/column definitions, and look back and forth to know which tables are pulled in for checks and which for data. For example, this with just four tables:

  SELECT alpha.*, epsilon.zoo
  FROM beta
  INNER JOIN alpha ON beta.id = alpha.foo
  LEFT JOIN epsilon ON alpha.boing = epsilon.id
  INNER JOIN gamma ON gamma.id = beta.bar
  WHERE gamma.baz = 'qux'
Requires you to understand more about everything and spreads things out more than this equivalent:

  SELECT *, boing.zoo FROM alpha WHERE foo.bar.baz='qux'
(This is based on a real query I was given to work with.)



