I'm not tired of reminding everyone that "conflict resolution" is no more than a euphemism for "breaking durability by dropping already committed and acknowledged data".
Either architect for no data overlap on writes across all the "actives" (in which case software like pgactive could be a good deal) or use a purely distributed database (like Yugabyte).
I saw in the docs that they recommend a scenario like this: each master is the only writer for a given schema, to avoid conflicts, while replication gives them all a copy of every schema to read from.
And I was wondering: what other ways of dividing up 'writer responsibility', besides schemas, would also work? Partitions?
For inserts it's "easy" if there's no possibility of writing the same PK value twice (e.g. PKs are random values like UUIDs, or include as part of a natural key an identifier that's unique to the writer: shard id, region id, you name it).
Once you have done that, for updates and deletes you need to keep the same rule (i.e. don't update "foreign" rows).
If you do this, no other technique is needed. Partitions, however, can be a good way to enforce some of these invariants, and they make it quick to see where data originated just from the table name. The same could apply to schemas.
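To make the ownership rule concrete, here's a minimal sketch in Python; the region id, key format and helper names are hypothetical illustrations, not anything prescribed by pgactive: each active node stamps its own identifier into every PK it generates and refuses to touch rows stamped by another node.

    import uuid

    # Assumed per-node configuration: each "active" writer gets its own id.
    REGION_ID = "eu-west-1"

    def new_pk() -> str:
        """Compose a PK that cannot collide with another writer's inserts."""
        return f"{REGION_ID}:{uuid.uuid4()}"

    def owned_by_us(pk: str) -> bool:
        """The update/delete rule: only touch rows this node originally wrote."""
        return pk.split(":", 1)[0] == REGION_ID

    row_key = new_pk()
    assert owned_by_us(row_key)             # our own row: safe to update/delete
    assert not owned_by_us("us-east-1:42")  # a "foreign" row: leave it alone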
Last time I checked, Firecracker didn't have a very compelling I/O story, which in my opinion made it not entirely adequate for running Postgres (or any other database).
In contrast, other similar VMMs, like Cloud Hypervisor [1], seem to have a better one. Why FC and not CH, then? (I have nothing against FC; I actually love it and have been using it, but it doesn't appear to be the best I/O-wise.)
> Firecracker didn't have a very compelling I/O story
Can you provide any sources for this claim? We're running Firecracker in production over at blacksmith dot sh and haven't been able to reproduce any perf regressions in Firecracker relative to CH in our internal benchmarking.
The major tradeoff with Firecracker is giving up some runtime performance for a quick boot time (if you actually need that; it obviously doesn't help if your app takes seconds to boot anyway). There are quite a few other tradeoffs too, like no GPU support, because that needs some of the device support they remove to make things boot fast. That's why projects like Cloud Hypervisor exist.
That issue is from 2020 and has already been addressed. The fact that io_uring support is not "GA" is mostly a matter of semantics and doesn't affect most use cases.
I was thinking of a similar approach, but mounting /nix/store from the host into the guest will only work if you have a single guest.
For multiple guests, you should rely instead on:
* A snapshot-able filesystem with the option to create clones (like ZFS); see the sketch after this list. I think this is a great idea, actually.
* Exporting /nix/store via NFS, so you can have multiple writers (but this creates some tight coupling in that accidentally deleting stuff there may disrupt all guests).
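For the ZFS option, here's a minimal sketch of the "one clone per guest" idea, driving the standard zfs snapshot/clone commands from Python; the pool and dataset names are made up for illustration, and each resulting clone would then be mounted into its guest as /nix/store:

    import subprocess

    POOL_DATASET = "tank/nix-store"    # hypothetical dataset backing /nix/store
    SNAPSHOT = POOL_DATASET + "@base"  # shared, read-only baseline

    def zfs(*args: str) -> None:
        subprocess.run(["zfs", *args], check=True)

    # One snapshot of the host's store...
    zfs("snapshot", SNAPSHOT)

    # ...then a writable clone per guest, so deleting paths inside one guest
    # cannot disrupt the others.
    for guest in ("vm1", "vm2", "vm3"):
        zfs("clone", SNAPSHOT, f"tank/guests/{guest}-nix-store")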
Mounting /nix/store from the host into the guest works with as many guests as you like; this is what the NixOS test framework does all the time. You just need a different "system" derivation for each (different) VM.
The problem with that is that the VM can see all of the needless software. If your goal is isolation, having a smaller closure is much better from a security point of view: if there's no coreutils, bash, etc., then there's no risk of an attack spawning a shell...
AGPL is fully open source, and definitely allows you to host it without open sourcing any of your own code. That's one of the very freedoms that the open source definition contains.