
From the article:

The intention of OrioleDB is not to compete with Postgres, but to make Postgres better. We believe the right long-term home for OrioleDB is inside Postgres itself. Our north star is to upstream what’s necessary so that OrioleDB can eventually be part of the Postgres source tree, developed and maintained in the open alongside the rest of Postgres.


OK, just saved to the file cringespeak.txt:

"Our north star is to..."

:)


Looks like it, based on this video of driving on the left side of the road in what looks to be Australia: https://www.youtube.com/watch?v=Fkh3s6WHJz8


I recently listened to https://www.localfirst.fm/18 from ElectricSQL. One of the things James mentioned was that Electric lets you use commodity CDNs for distributing sync data, which takes the load off your main Postgres and servers.

This seems like a good pattern, but of lower value for a SaaS app with many customers storing private data in your service. This is because the cache hit-rate for any particular company's data would be low. Is this an accurate assessment, or did I misunderstand something?


Hey, one of the things here is to define shapes that are shared. If you imagine syncing a shape that is just that user's data, then it may be unique. But if you sync, say, one shape per project that the user has access to, plus a small shape of unique user data, then you get a shared cache between all the users who have access to each project.

It’s worth noting that Electric is still efficient on read even if you miss the CDN cache. The shape log is a sequential read off disk.
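
To make that concrete, here's a rough sketch using Electric's TypeScript client. The host, table names, and where clauses are made up, and the exact ShapeStream options may differ between Electric versions:

  import { ShapeStream } from '@electric-sql/client'

  // Shared shape: every user with access to project 42 requests the
  // same shape, so a CDN can serve most of them from cache.
  const projectShape = new ShapeStream({
    url: 'https://electric.example.com/v1/shape',
    params: { table: 'tickets', where: 'project_id = 42' },
  })

  // Per-user shape: unique per user, so it will usually miss the CDN
  // cache, but it's cheap because it's small.
  const userShape = new ShapeStream({
    url: 'https://electric.example.com/v1/shape',
    params: { table: 'user_settings', where: "user_id = 'u_123'" },
  })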


I'm curious how you'd configure this. Is it common (and safe) to let a CDN cache private data for authenticated users?

Say Jira used Electric: would you be able to put all tickets for a project behind a CDN cache key? You'd need a CDN that is able to run auth logic, such as verifying a JWT, to ensure you don't leak data to unauthorized users, right?


Yup, you can put an auth proxy in front of the CDN, for example using an edge worker.

See the auth guide: https://electric-sql.com/docs/guides/auth

Some CDNs also validate JWTs, so the CDN can be the proxy part of the Gatekeeper pattern (in the guide).
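
As a rough sketch of the proxy idea, here's what a Cloudflare Worker in front of Electric might look like using the jose library. The origin host and the project_id claim are made-up names for illustration, not from the guide:

  import { jwtVerify } from 'jose'

  const SECRET = new TextEncoder().encode('use-a-real-secret')
  const ELECTRIC_ORIGIN = 'electric.example.com' // assumed origin

  export default {
    async fetch(request: Request): Promise<Response> {
      const token = (request.headers.get('Authorization') ?? '')
        .replace(/^Bearer /, '')
      try {
        const { payload } = await jwtVerify(token, SECRET)
        const url = new URL(request.url)
        // Only allow shapes scoped to a project this token grants.
        const projectId = String(payload.project_id) // hypothetical claim
        const where = url.searchParams.get('where') ?? ''
        if (!where.includes(`project_id = ${projectId}`)) {
          return new Response('Forbidden', { status: 403 })
        }
        // Forward the authorized request to Electric.
        url.host = ELECTRIC_ORIGIN
        return fetch(new Request(url.toString(), request))
      } catch {
        return new Response('Unauthorized', { status: 401 })
      }
    },
  }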


Another option too for scaling reads is just putting an nginx in your cluster.

Electric itself is quite scalable at reads too, so for a SaaS use case you might not need any HTTP proxy help.
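
For example, a minimal nginx caching proxy in front of Electric might look something like this. The paths, sizes, and upstream name are placeholders, and Electric's live/long-polling requests would need to be excluded from caching:

  # Must live in the http {} context.
  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=shapes:10m max_size=1g;

  server {
    listen 80;

    location /v1/shape {
      proxy_pass http://electric:3000;
      proxy_cache shapes;
      # The query string (table, where, offset) identifies the shape.
      proxy_cache_key $scheme$host$uri$is_args$args;
      proxy_cache_valid 200 10s;
    }
  }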


Will this be partially available from the Claude website for connections to other web services? E.g. could the GitHub server be called from https://claude.ai?


At the moment only Claude Desktop supports MCP. Claude.ai itself does not.


Any idea on timelines? I’d love to be able to have generation and tool use contained within a customer’s AWS account using Bedrock, i.e. I pass a single CDK stack that can interface with an internet-exposed MCP service and an in-VPC service for sensitive data.


https://where.durableobjects.live is a good website that shows you where they live. Only about 10-11% of Cloudflare PoPs host Durable Objects. Requests to another PoP to create a DO will get forwarded to one of the nearby PoPs which do host them.


The issue here is that if company.com does not use Google Workspace and hasn't claimed company.com, then any employee can sign up for a "consumer" Google account using user@company.com.

There are legitimate reasons for this, e.g. imagine an employee at a company that uses Office 365 needing to set up an account for Google AdWords.


I don't really understand what this is offering beyond Cloudflare's recent release of running SQLite in durable objects: https://blog.cloudflare.com/sqlite-in-durable-objects/. Is it about providing an external interface to Cloudflare's SQLite databases?


The project is open source (https://github.com/Brayden/starbasedb/blob/main/src/index.ts). And yes, it provides a way to query and update Cloudflare's SQLite over HTTP.


If that's the case, there's libsql (https://github.com/tursodatabase/libsql), which already provides HTTP clients, embedded replicas (essentially good old SQLite but with replication support), and a self-hosted SQLite server, and is maintained by a company using it for their own product.
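
For instance, a minimal sketch with the TypeScript client against a self-hosted libsql server (the URL and table are placeholders):

  import { createClient } from '@libsql/client'

  // An http:// URL talks to a self-hosted sqld server; Turso URLs
  // plus an authToken work the same way.
  const db = createClient({ url: 'http://127.0.0.1:8080' })

  await db.execute(
    'CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)'
  )
  await db.execute({
    sql: 'INSERT INTO notes (body) VALUES (?)',
    args: ['hello over HTTP'],
  })
  const rs = await db.execute('SELECT id, body FROM notes')
  console.log(rs.rows)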


Some day I really need to learn when to use SQLite in a Durable Object vs. the eventually consistent option (R2).


IIRC they are both powered by the same engine to stream and replicate the WAL. I believe R2 is now implemented as a Durable Object backed by SQLite.


Does this mean that R2 is not "eventually consistent" anymore?

I wonder what the use cases are for "eventually consistent" storage, and when it's safe to use it.

I'm guessing that maybe things like social media posts could be fine with "eventually consistent". It's not really critical to have the latest data.

I'm guessing that things like a shopping cart, a user account, or a session shouldn't use an "eventually consistent" database.


Still think there is a lot we can add to StarbaseDB to make the developer experience on SQLite databases better, and personally I think it starts with a better external interface. Provide a frictionless way to get started, then figure out how to improve the ways developers interface with their databases.

Is it auto-accessible REST endpoints? Easy-to-implement WebSocket support for the database? Data replication for scaling reads? Offline data syncing? There are a lot of potential wins for a layer like this to build on.


Could you explain more about why you were not able to sign the URLs at request time? Creating an HMAC is very fast.


I’m going to have to look into this today. I was assuming generating the URLs hit an API, but if it can happen fast locally that changes things.


Yup, pre-signing is fast and local, without any I/O. It’s just math. You could likely pre-sign thousands of URLs per second if you needed to.
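
For example, with the AWS SDK v3 for JavaScript/TypeScript, getSignedUrl builds the canonical request and HMAC-signs it locally; apart from one-time credential resolution there's no network call. The bucket and key are placeholders:

  import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'
  import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

  const s3 = new S3Client({ region: 'us-east-1' })

  // Pure computation: no request is made to S3 here.
  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: 'my-bucket', Key: 'uploads/photo.jpg' }),
    { expiresIn: 3600 } // seconds
  )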


The Surface Laptop scored 1,745 on the Procyon AI Score, while the MacBook Air managed 889. Qualcomm’s Snapdragon X Elite has 45 TOPS of AI acceleration performance, much more than the 18 TOPS found on the M3.

TOPS (Trillion Operations Per Second) is a meaningless score without including the precision. Are they INT4, INT8, INT16 or FP16? Microsoft's qualifications for a Copilot+ laptop require 40 TOPS at INT8: https://www.theregister.com/2024/05/21/qualcomm_windows_micr.... The new Snapdragon X Elite can do 45 TOPS at INT8.

Apple's M3 has 18 TOPS, and their M4 has 38 TOPS, but the M3 was measured at INT16 and the M4 at INT8: https://www.anandtech.com/show/21387/apple-announces-m4-soc-.... Cutting the precision in half lets you do ~twice the amount of work.

The Snapdragon is very impressive, but it's silly to see TOPS claims from companies repeated by journalists without adding some context.
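
Back of the envelope, assuming the ~2x-per-halved-precision rule above holds:

  18 \,\text{TOPS}_{\text{INT16}} \times 2 \approx 36 \,\text{TOPS}_{\text{INT8-equivalent}}, \qquad 45 / 36 \approx 1.25

So normalized to INT8, the Snapdragon's NPU lead over the M3 is closer to ~1.25x than the ~2.5x the raw numbers suggest.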


Thanks for pointing that out. I had a look at the Terraform provider but missed that delayHours was hardcoded to 0. I’ll update the article.


You're welcome! Your article taught me exactly what to go looking for.

While we're at it, it also looks like the provider couldn't provision stretched clusters at all until mid-April. I'm not sure what this means for the theory presented in the article. Maybe Uni was new to TF (or even actively onboarding) and paid the beginner's tax? TF is great at turning beginner mistakes into "you deleted your infra." It's an uncomfortable amount of speculation, but it's plausible.

Relevant discussion is on https://github.com/GoogleCloudPlatform/magic-modules/pull/10... and relevant code changes are on https://github.com/hashicorp/terraform-provider-google/pull/...

