Most VSCode extensions are pure slop, to the point where you're almost certainly better off using any other option for a given tool where one exists.
And I don’t mean slop in the new “AI slop” sense of the word, but more “ostensibly supposed to do something specialized but in practice not particularly effective, well documented, or useful”. The entire extension ecosystem is hot garbage.
Short answer: the core LinkedQL live query engine runs on the backend today, and there’s an embeddable variant (FlashQL) that runs directly in the frontend with the same LinkedQL capabilities – live queries, DeepRefs, etc.
1. Pure frontend / local data
For data that can live entirely on the client, you can spin up an in-browser FlashQL instance:
const client = new FlashQL(); // runs in the page / worker

await client.query(`
  CREATE TABLE users (
    id UUID PRIMARY KEY,
    name TEXT
  )
`);

// Live queries work the same way as on the backend:
const result = await client.query(
  'SELECT * FROM users',
  { live: true }
);
From there, result is a live result set: inserts/updates/deletes that match the query will show up in the rows, and all the same features (live queries, DeepRefs, etc.) behave as they do on a backend instance.
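To make that concrete, here's a rough sketch of reacting to a change. Caveat: the subscription hook below (result.on('change', ...)) is my assumption, not confirmed FlashQL API, so check the docs for the actual event name:

// Assumed subscription API; 'change' is a hypothetical event name:
result.on('change', () => {
  console.log('rows now:', result.rows);
});

// Inserting a matching row updates the live result set
// without re-running the SELECT:
await client.query(
  "INSERT INTO users (id, name) VALUES ('8d6e0d5e-2f43-4b8a-9c1d-4a5b6c7d8e9f', 'Ada')"
);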
At the moment FlashQL is in-memory only; persistence backends like IndexedDB / LocalStorage are on the roadmap.
2. Remote database from the frontend
If your source of truth is a remote Postgres/MySQL instance, the model we’re building is:
- a LinkedQL engine next to the database, and
- a FlashQL instance in the frontend that federates/syncs with that backend engine.
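The wiring could end up looking something like this. Everything below is a hypothetical sketch (the sync option and its shape aren't finalized API), just to illustrate the model:

// Hypothetical sketch; 'sync' and its options are not confirmed API.
const client = new FlashQL({
  // points at the LinkedQL engine deployed next to Postgres/MySQL
  sync: { url: 'wss://db.example.com/linkedql' },
});

// Reads run locally against the synced subset; writes and live-query
// invalidations federate back to the backend engine:
const result = await client.query(
  'SELECT * FROM orders',
  { live: true }
);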
Implementing rate limiting in a scalable manner is harder than just making log ingestion fast, and unlimited ingestion makes the cloud provider money whereas rate limiting makes them less. The incentive makes it pretty clear which side of the tradeoff cloud providers are going to come down on.
Last month, a single container in my GKE cluster (Sao Paulo region) entered an error loop, outputting to stdout at ~2k logs/second.
I discovered the hard way that GKE's default behavior is to ingest 100% of this into Cloud Logging with no rate limiting.
My bill jumped nearly 1000% before alerts caught it.
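For a sense of scale, a back-of-envelope estimate (assuming ~1 KB per log line and Cloud Logging's ~$0.50/GiB ingestion price): 2k logs/second × 1 KB ≈ 2 MB/s, which is about 160 GiB/day, or roughly 5 TiB a month, i.e. on the order of $2,400/month from a single misbehaving container.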
Your defense of Google might sound smart, but VSCode is not just a fork of Electron and Chromium; there's a lot of work in there, not just clicking a fork button.
For me it's about the value-to-token ratio feeling off, as I stated in another comment.
> What bugs me most is that many of those 340M tokens feel wasteful? Like the LLM will use 50k tokens exploring dead ends before finding a solution that could have been expressed in 5k tokens. The productivity gain is real, but it feels like I'm paying 10x more than what should be "fair" for the actual value delivered.