I've been using GraphQL for a project recently and... yeah, I'm not a fan. The data is stored relationally and exposed through views, fed through a graph layer, and then has to be flattened on the front end into something that's not far off from the originally exposed view. That's a ton of work, and it really messes with front-end experimentation because every change means unpacking each graph representation all over again.
Something is wrong here. The whole point of GQL is to serve things in exactly the format the front end wants. Even the other negative comments here mention how it is easy to use on the front end.
Perhaps. But just consider that your sibling comments have suggested about 5 different middleware tools that all supposedly do some similar thing. So I may be wrong, but at least four other people are wrong too ;)
This is never the case in my experience: every time I use GQL, I have to reshape the response. GQL only lets you declare the data you want; it does not let you declare the shape in which you want it.
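To illustrate (with an entirely made-up schema, since the thread doesn't share one): the query can select exactly the fields you want, but the response still arrives nested the way the graph is modeled, and the flattening into UI-friendly rows is on you:

```typescript
// Hypothetical response shape for a query like:
//   { orders { id, customer { name }, items { product { sku }, qty } } }
// GraphQL let us pick the fields, but the nesting mirrors the graph,
// not the flat rows a UI table wants.
interface OrderResponse {
  orders: {
    id: string;
    customer: { name: string };
    items: { product: { sku: string }; qty: number }[];
  }[];
}

interface OrderRow {
  orderId: string;
  customerName: string;
  sku: string;
  qty: number;
}

// The reshaping step the comment describes: unpack the graph
// representation into flat rows, one per order item.
function flattenOrders(res: OrderResponse): OrderRow[] {
  return res.orders.flatMap((o) =>
    o.items.map((item) => ({
      orderId: o.id,
      customerName: o.customer.name,
      sku: item.product.sku,
      qty: item.qty,
    }))
  );
}
```

Every new screen that wants a different flat shape needs another one of these, which is exactly the experimentation tax being described.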
Then that's a problem with the schema implementation and not necessarily a fault of GraphQL itself. The people implementing a GraphQL schema should work very closely with the people building the frontend and put a lot of thought into how the frontend wants to consume the data.
A GraphQL schema that basically just exposes the data models 1:1, without considering the exact workflows the frontend needs, is a terrible implementation and a misuse of GraphQL. Might as well just expose the data over REST.
Unfortunately this is my experience as well. Generally it's a misalignment of priorities. Since the people writing the endpoint don't have to consume it, they just do whatever is easiest as quickly as possible. And often many are dismissive of frontend concerns when challenged.
Would this solve the problem described? Sounds like the annoying part is solely on the front-end, the unpacking/flattening of what the Postgraphile service returns. From their description, I wouldn't be surprised if they were already using Postgraphile or Hasura as "the graph layer".
Heh, I actually use Hasura and I find it extremely painful to use. It's unbearably fragile to state changes (e.g., psql scripts + pg_dump/psql restores) and its errors are inconsistent enough to give you just enough constant false hope that your problem's fixed, but a second step is almost always needed... and without a helpful error or button that just explains and fixes all the things from a single screen. I realize I'm probably using it wrong, but I really don't think I'm doing anything exceptionally "out there".
> "its errors are inconsistent enough to give you just enough constant false hope that your problem's fixed, but a second step is almost always needed... and without a helpful error or button that just explains and fixes all the things from a single screen."
There are buttons on the "Settings" screen (/console/settings/metadata-status) you can click that should put your instance back in a working state (and it'll redirect you here by default if your metadata is invalid):
> [DASHBOARD TEXT]: "You have been redirected because your GraphQL Engine metadata is in an inconsistent state. To delete all the inconsistent objects from the metadata, click the "Delete all" button. If you want to manage these objects on your own, please do so and click on the "Reload Metadata" button to check if the inconsistencies have been resolved."
As someone who still builds their personal projects with it -- yeah, the error messages can be kind of opaque if they're related to Hasura's internal metadata/state. For errors that come from external services, those are passed through at least when "HASURA_GRAPHQL_DEV_MODE" is enabled.
> "It's unbearably fragile to state changes (e.g., psql scripts + pg_dump/psql restores) ... I realize I'm probably using it wrong, but I really don't think I'm doing anything exceptionally "out there"."
Are you dropping tables/columns which have metadata on them? I.e., a relationship or permission on a table?
If you remove a resource that has metadata attached without also removing the metadata that references it, the effect is the same as trying to drop a table in an RDBMS while foreign keys still reference it.
Thanks for the response! You're right that that's what I'm doing wrong, though the problem comes after I recreate those relationships on the RDBMS side: Hasura really struggles to recognize that even though things were torn down, they were brought back up in the same way. Having one button to "repair" it would be nice. This mostly happens because I more or less start from scratch on the RDBMS side every time I make a change. I'd do the same on the Hasura side, but tracking relationships (I think that's what it's called) takes about ten minutes to initialize on a relatively small database, so I'm forced into making as few changes as possible.
> "but tracking relationships (I think that's what it's called) takes about ten minutes to initialize on a relatively small database"
Oof, this is insane. Should not be the case.
Are you using the Hasura CLI to automatically track any changes made with the web UI to local YAML files? You can use this, along with the ".cli-migrations" variant of the Docker image to automatically apply your metadata/migration as soon as the image starts.
So you'd run "hasura console" in a terminal, which serves the special web UI that mirrors your changes to local files, at http://localhost:9695
Then when you want to start fresh, just docker-compose down/up and it'll handle auto-applying everything for you.
If you use Relay, the graph representation is reasonably unpacked into a state store for you, and you're given the ability to change both the state store and the backend data in one fell swoop.
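This isn't Relay's actual API, just a hand-rolled sketch of the idea Relay implements: every object with an id is stored once in a flat map, nesting is replaced by references, and updating a record in the store updates every view that reads it.

```typescript
// Sketch of a normalized store, assuming every node carries an `id`
// (Relay does this with much more machinery; names here are made up).
type StoreRecord = Record<string, unknown>;

// Walk a nested response, store each node flat under its id, and
// replace nested nodes with { __ref: id } pointers. Returns the id
// of the node that was normalized.
function normalize(
  node: { id: string; [key: string]: unknown },
  store: Map<string, StoreRecord>
): string {
  const flat: StoreRecord = {};
  for (const [key, value] of Object.entries(node)) {
    if (value !== null && typeof value === "object" && "id" in (value as object)) {
      // Nested object with an id: normalize it separately, keep a reference.
      flat[key] = { __ref: normalize(value as { id: string }, store) };
    } else {
      flat[key] = value;
    }
  }
  store.set(node.id, flat);
  return node.id;
}
```

The payoff is that the front end reads from the flat store rather than re-unpacking each nested query result, which is the pain point the top comment describes.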