
(cross-posting from the blog)

> Second [...] In that case, why use 2PC at all?

Heuristic decisions have been in the XA standard since its first version (1991). YMMV, but they are very often used (to say the least) in production 2PC systems. See for example how Mark Little describes them: http://planet.jboss.org/post/2pc_or_3pc. They are not really presented as an optional thing. It shows traditional databases are not that 'CP at all costs' when it comes to managing failure.

> Your conclusion is correct [...] the resulting algorithm is not C or A

Yeah... I see 2PC as not partition-tolerant, since a partition breaks ACID atomicity. Once partition intolerance is accepted, CA fits well: 2PC is consistent and available until there is a partition. Saying '2PC is not consistent and not available but is partition-tolerant' is not technically false, but it's a much less accurate description.
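To make the "partition breaks atomicity" point concrete, here is a toy sketch (names like Participant and two_phase_commit are illustrative, not from any real XA implementation) of a coordinator that loses contact with a prepared participant between phase 1 and phase 2. The unreachable participant is left in-doubt, holding locks, which is exactly the situation where an XA heuristic decision gets taken:

```python
class Partitioned(Exception):
    """Raised when a message cannot reach a participant."""

class Participant:
    def __init__(self, name, fail_after_prepare=False):
        self.name = name
        self.fail_after_prepare = fail_after_prepare
        self.reachable = True
        self.state = "active"  # active -> prepared -> committed

    def prepare(self):
        # Phase 1: vote yes and hold locks until the decision arrives.
        if not self.reachable:
            raise Partitioned(self.name)
        self.state = "prepared"
        if self.fail_after_prepare:
            # Simulate a partition appearing between the two phases.
            self.reachable = False

    def commit(self):
        if not self.reachable:
            raise Partitioned(self.name)
        self.state = "committed"

def two_phase_commit(participants):
    # Phase 1: collect votes from everyone.
    for p in participants:
        p.prepare()
    # Phase 2: deliver the commit decision.
    outcomes = {}
    for p in participants:
        try:
            p.commit()
            outcomes[p.name] = "committed"
        except Partitioned:
            # The participant voted yes but is unreachable: it is in-doubt.
            # Rather than block forever, an operator or resource manager
            # may take a heuristic decision locally -- risking an outcome
            # that disagrees with the rest of the transaction.
            outcomes[p.name] = "in-doubt"
    return outcomes

a = Participant("a")
b = Participant("b", fail_after_prepare=True)
print(two_phase_commit([a, b]))  # a committed, b in-doubt: atomicity broken
```

The point of the sketch: once "a" has committed and "b" is in-doubt, no local heuristic choice at "b" can restore a guaranteed-atomic outcome; it can only trade blocking for the risk of a heuristic mixed result.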

> Third [...] datacenters full of commodity servers [...] redundant NICs and cables, located next to each other [...] It isn't hard to believe that you could build a system, like that, in which you could add algorithms that were CA

I just totally agree with you here. CAP is used as a categorization tool for all types of distributed systems, but there is a huge difference between an application built on commodity hardware running in a multi-DC configuration and a two-node system running on redundant hardware in a single rack. Historically, 2PC has typically been used for very specific applications: few nodes, a limited number of operations between the nodes, expensive redundant hardware all over the place, and limited scaling-out needs (if any). Not your typical big data system.




