It only works if you're just running queries in a console. Any serious work with SQL requires all sorts of weird stuff built around it, like dbt or cubejs[1].
I don't really think that's true - we write quite a lot of SQL in-house and have no issues doing it safely. There are some tools we rely on to make our lives easier, but these mostly revolve around making batch operations easier to express (like being able to write IN(:array) instead of gluing a count-dependent run of placeholders into every individual query). SQL definitely isn't the cleanest thing ever, and there are a number of improvements I'd personally really appreciate (changing statement order, trailing commas, better aggregate and window modifier definitions), but doing Serious SQL is definitely achievable - even weird statistical bucketing and aggregation that produces a query that's... 291 lines long.
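To make the IN(:array) point concrete, here is a rough sketch of the difference - the exact placeholder syntax depends on your driver/helper library, and the table and column names are made up:

    -- Hand-rolled: the application counts the array and glues a matching
    -- number of placeholders into the statement text for every query.
    SELECT id, name
    FROM customers
    WHERE id IN (?, ?, ?);

    -- With array expansion support the statement stays static; the helper
    -- expands :ids to however many values were actually bound.
    SELECT id, name
    FROM customers
    WHERE id IN (:ids);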
I write highly complex SQL for a living, but I haven't used dbt, so I don't think I'm the best person to comment on it - my comments are more directed at traditional mixed solutions (sequentially submitted queries stitched together in C++/PHP, or pruning and joining multiple ORM-delivered results into a full in-memory data set). What I have found is that pretty much anything you want to do can be done in pure SQL, and the resulting mass is usually going to yield much better performance than the mixed approach.

The way I usually like to structure mixed SQL is preparation, execution, cleanup: have a blob of complex logic preprocess the request into the appropriate SQL recipe; execute said recipe; then go back to imperative-programming land to apply any differently sourced joins (i.e. combine a DB query result with something coming back from OpenSearch, a memcached query or some other non-relational source), apply any complex customization and value cleanup (like pulling data out of a JSONB blob and actually sending it as a plain array to the consumer), and then shuffle it off to whoever requested it.
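As a rough illustration of the "pure SQL recipe" point - one Postgres-flavoured statement (table and column names invented) standing in for what a mixed solution might do as two round trips plus an in-memory join:

    -- Instead of fetching recent orders, then fetching per-customer totals
    -- separately and stitching the two result sets together in C++/PHP,
    -- hand the combined shape back in one statement.
    WITH recent_orders AS (
        SELECT id, customer_id, total, created_at
        FROM orders
        WHERE created_at >= now() - interval '30 days'
    ),
    customer_totals AS (
        SELECT customer_id, sum(total) AS total_30d, count(*) AS orders_30d
        FROM recent_orders
        GROUP BY customer_id
    )
    SELECT o.id, o.customer_id, o.total, t.total_30d, t.orders_30d
    FROM recent_orders o
    JOIN customer_totals t USING (customer_id)
    ORDER BY o.created_at DESC;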
Again, I don't feel comfortable commenting on any tool I haven't personally used. I hope that was helpful - if you have any other questions, I'm happy to try and answer.
As someone who uses dbt, I would recommend it for an analytics workflow. The templating is useful for shared code, like a CTE you use in multiple places. That said, you don't need to use a lot of templating for it to be useful.
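To show what the shared-CTE case looks like (the file, macro and model names here are hypothetical) - the shared select lives in a macro, and any model can drop it into a CTE:

    -- macros/active_users.sql (hypothetical)
    {% macro active_users_sql() %}
        select user_id, min(event_at) as first_seen
        from {{ ref('stg_events') }}
        group by user_id
    {% endmacro %}

    -- models/retention.sql
    with active_users as (
        {{ active_users_sql() }}
    )
    select * from active_users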
dbt shines not because of the language/templating, but because it handles a lot of the scut work of building out a data warehouse. Write the select statements you want to populate the model, write tests to constrain the model, and build.
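To make that concrete (names here are hypothetical): a model is just a select, and a one-off test can be another select that must return zero rows - the common unique/not_null checks go in a YAML schema file instead, and dbt build materializes the models and runs the tests:

    -- models/daily_revenue.sql
    select
        order_date,
        sum(total) as revenue
    from {{ ref('stg_orders') }}
    group by order_date

    -- tests/assert_revenue_non_negative.sql
    -- A singular test: it fails if any rows come back.
    select *
    from {{ ref('daily_revenue') }}
    where revenue < 0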
1. https://cube.dev/docs/