Hacker News | ahachete's comments

I'm not tired of reminding everyone that "conflict resolution" is no more than a euphemism for "breaking durability by dropping already committed and acknowledged data".

Either architect for no data overlap on writes across all the "actives" (in which case software like pgactive could be a good fit) or use a purely distributed database (like Yugabyte).


I saw in the docs that they recommend a scenario like this: each master is the only writer for a given schema, to avoid conflicts, but replication gives them all a copy of all the schemas to read from.

And I was wondering what other ways of dividing up 'writer responsibility', besides schemas, would also work. Partitions?


For inserts it's "easy" if there's no possibility of writing the same PK value twice (e.g. PKs are random values like UUIDs, or include, as part of a natural key, an identifier that's unique to the writer: shard id, region id, you name it).

Once you have done that, for updates and deletes you need to keep the same rule (i.e. don't update "foreign" rows).

If you do this, no other technique is needed. Partitions, however, are potentially a good way to enforce some of these invariants, and they give a quick indication of where data originated just from the table name. The same could apply to schemas.
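
For instance, a minimal sketch of the list-partitioning variant (table, column, and region names here are purely illustrative, not from pgactive's docs):

  -- One LIST partition per writing region; the partition name itself
  -- tells you where a row originated. gen_random_uuid() is built in
  -- since PG 13 (earlier versions need pgcrypto).
  CREATE TABLE events (
      region_id text NOT NULL,
      event_id  uuid NOT NULL DEFAULT gen_random_uuid(),
      payload   jsonb,
      PRIMARY KEY (region_id, event_id)
  ) PARTITION BY LIST (region_id);

  CREATE TABLE events_eu_west_1 PARTITION OF events FOR VALUES IN ('eu-west-1');
  CREATE TABLE events_us_east_1 PARTITION OF events FOR VALUES IN ('us-east-1');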

RLS may also help enforce these invariants.
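
A hedged sketch of how RLS could express the "don't touch foreign rows" rule, assuming each writer node identifies itself via a session setting (the app.region_id GUC and the orders table are hypothetical):

  CREATE TABLE orders (
      region_id text NOT NULL,
      order_id  uuid NOT NULL,
      payload   jsonb,
      PRIMARY KEY (region_id, order_id)
  );

  ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

  -- Every node may read everything...
  CREATE POLICY orders_read ON orders FOR SELECT USING (true);

  -- ...but may only insert/update/delete rows tagged with its own region,
  -- e.g. after running SET app.region_id = 'eu-west-1' on that node.
  CREATE POLICY orders_insert ON orders FOR INSERT
      WITH CHECK (region_id = current_setting('app.region_id'));
  CREATE POLICY orders_update ON orders FOR UPDATE
      USING (region_id = current_setting('app.region_id'));
  CREATE POLICY orders_delete ON orders FOR DELETE
      USING (region_id = current_setting('app.region_id'));

Note that RLS is bypassed by superusers and by the table owner unless FORCE ROW LEVEL SECURITY is set, so the application role shouldn't own the table.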


> And here for AWS multi-AZ losing data: https://news.ycombinator.com/item?id=4115937

That link is from 2012, 13 years ago.

AFAIK, as of today AWS provides strong guarantees that AZs are physically, network, and power isolated from each other.


They certainly say that. But it might turn out to be a lie, just like it did with OVH and Google.


I think there's enough evidence that AZs are kilometers apart.


> Most notably, 'null'::jsonb = 'null'::jsonb is true whereas NULL = NULL is NULL

Because 'null' in the JSON spec is a literal value (a constant), not SQL's NULL. Nothing to see here.

https://datatracker.ietf.org/doc/html/rfc7159
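
A quick illustration in psql (expected results as comments):

  SELECT 'null'::jsonb = 'null'::jsonb;  -- true: two equal jsonb values
  SELECT NULL = NULL;                    -- NULL: unknown compared to unknown
  SELECT 'null'::jsonb IS NULL;          -- false: a non-NULL jsonb value holding JSON null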


Right--it makes sense and shouldn't be changed, but it's a bit unintuitive for newcomers.


Last time I checked, Firecracker didn't have a very compelling I/O story, which made it, in my opinion, not completely adequate for running Postgres (or any other database).

In contrast, other similar VMMs seem to have a better one, like Cloud Hypervisor [1]. Why then FC and not CH? (I've nothing against FC; I actually love it and have been using it, but it appears not to be the best I/O-wise.)

[1]: https://github.com/cloud-hypervisor/cloud-hypervisor


> Firecracker didn't have a very compelling I/O story

Can you provide any sources for this claim? We're running Firecracker in production over at blacksmith dot sh and haven't been able to reproduce any perf regressions in Firecracker over CH in our internal benchmarking.


The major tradeoff with Firecracker is a reduction in runtime performance in exchange for a quick boot time (if you actually need that; this obviously doesn't work if your app takes seconds to boot). There are quite a lot of other tradeoffs too, like 'no GPU', because that needs some of the support they remove to make things boot fast. That's why projects like Cloud Hypervisor exist.



That issue is from 2020 and has already been addressed. The fact that io_uring support is not "GA" is mostly a semantic aspect that doesn't affect most use-cases.


I was thinking of a similar approach, but mounting /nix/store from the host into the guest will only work if you have a single guest.

For multiple guests, you should rely instead on:

* A snapshot-able filesystem with the option to create clones (like ZFS). I think this is a great idea, actually.

* Exporting /nix/store via NFS, so you can have multiple writers (but this creates some tight coupling, in that accidentally deleting stuff there may disrupt all guests).


Mounting /nix/store from the host into the guest works with as many guests as you like; this is what the NixOS tests framework does all the time. You just need a different "system" derivation for each (different) VM.

The problem with that is that the VM can see all the needless software, so if your goal is isolation, having a smaller closure is much better from a security point of view: if there's no coreutils, bash, etc., then there's no risk of a shell being spawned by an attack...


AGPL is fully open source, and it definitely allows you to host it without open-sourcing any of your own code. That's one of the very freedoms that the Open Source Definition contains.


Lenovo and Dell have some laptops with Linux, and they are very good ones.

(Not sure if you meant rugged ones; that may not be the case, but I guess that's a tiny percentage of the market.)


Crowdstrike also has an Ubuntu Linux version. We're required to install it at work.


Interested in contributing/co-writing such a runbook for StackGres? [1]

[1]: https://stackgres.io/doc/latest/runbooks/


AGPL is the best OSS license to ensure project continuity as Open Source. That simple.

Permissive licenses allow for proprietary forks, which may become more successful than the upstream project.

An AGPL project can benefit from any improvements made in any fork, and all those improvements will remain OSS for everyone.

Nothing written here is related in any way to monetization.


Congratulations!

I also love writing AWS API-compatible services. That's why I did Dyna53 [1] ;P

(I know, unrelated, but hopefully funny)

[1] https://dyna53.io


It is fun, indeed. Could be a candidate for the best technology abuse of the year.


This is an absolutely unhinged project idea. Great work


This is really fun!

