> there are still fairly hard guarantees about things like consistency that you get with other isolation levels
What do you mean by consistency? I agree that there are many ways to ensure that application integrity constraints are not violated--without using serializability. My point is that, without ACID serializability, you'll have to do some extra work to ensure them in general [1].
> The author of the post basically seems to treat any isolation level below serializability as some sort of sham perpetrated on the development community, and that's not the case
Weak isolation has a long tradition spanning decades [2] and is hardly a "sham." It's well known that weak isolation doesn't guarantee the ACID properties as traditionally defined. My point is that many databases don't provide ACID as they promise.
> It's still a different world than trying to mimic ACID properties in a NoSQL database
In terms of current offerings, I'm sympathetic to this viewpoint, but you might be surprised how cheaply we can get properties like the transactional atomicity you mention.
In general, I'm curious how easily anyone's able to take their application-level requirements and map them down onto an isolation level like "read committed," especially given how awkwardly those levels are defined [3] and how many corner cases there are.
Fundamentally, what matters is the set of invariants you want to preserve, and it's usually the case that some number can be preserved for you by the database and some can't and have to be dealt with at the application level. So by "consistency" I mean "some invariants that I care about will be preserved," but that doesn't mean all such invariants are.
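To make that split concrete, here's a minimal sketch (assuming Postgres via psycopg2; the "accounts" schema and DSN are hypothetical, not anyone's real system) of one invariant the database can hold for you declaratively and one it can't:

    # One invariant the database enforces for us: balances never go negative.
    # Postgres via psycopg2; the schema here is purely illustrative.
    import psycopg2

    conn = psycopg2.connect("dbname=bank")  # hypothetical DSN
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS accounts (
                id      bigint  PRIMARY KEY,
                balance numeric NOT NULL CHECK (balance >= 0)
            )
        """)
    conn.commit()
    # A rule like "at most $300 withdrawn per day" spans multiple rows,
    # so it can't be a simple CHECK constraint; it lives in app code.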
For example, if I'm writing a process that merely queries a table to pull back the user's account balance in a single query, read committed isolation might be enough for that particular use case to have "consistency": I know I'm always seeing some committed balance, even if it might not reflect transactions that are currently in flight (thus giving me a different answer if I run the query again in 2 seconds). That's still a better consistency guarantee than a dirty-read isolation level (or effectively no isolation), so it's still useful.
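A rough illustration of that read-only case (again psycopg2/Postgres with a hypothetical schema): under read committed each statement sees some committed state, but nothing pins that state across runs:

    # Read-only balance lookup under READ COMMITTED: we always see *some*
    # committed balance, but re-running moments later may return another.
    import psycopg2
    from psycopg2 import extensions

    conn = psycopg2.connect("dbname=bank")  # hypothetical DSN
    conn.set_session(
        isolation_level=extensions.ISOLATION_LEVEL_READ_COMMITTED)
    with conn.cursor() as cur:
        cur.execute("SELECT balance FROM accounts WHERE id = %s", (42,))
        (balance,) = cur.fetchone()
    conn.commit()  # end the transaction; a repeat query starts fresh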
If I'm doing an actual update to account balances, however, that level of consistency is obviously no longer good enough. If all updates hit the same rows in the same tables, snapshot isolation might be sufficient to avoid problems; if it isn't, I can acquire explicit update locks and the like when there's a risk of conflicting updates across different tables. Even then, though, I'll need to worry about application-level invariants (like "a person can only withdraw up to $300 per day").
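As a sketch of that update case (psycopg2/Postgres again; the schema and the $300 rule are stand-ins), an explicit row lock serializes concurrent withdrawals on one account while the daily cap stays an application-level check:

    # Withdrawal with an explicit row lock plus an app-level invariant.
    # Postgres via psycopg2; "accounts"/"withdrawals" are illustrative.
    import psycopg2

    DAILY_LIMIT = 300

    def withdraw(conn, acct_id, amount):
        with conn.cursor() as cur:
            # Lock the row: concurrent withdrawals on this account serialize.
            cur.execute(
                "SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                (acct_id,))
            (balance,) = cur.fetchone()

            # Invariant the database can't express declaratively:
            # total withdrawals today must stay under the daily limit.
            cur.execute(
                "SELECT coalesce(sum(amount), 0) FROM withdrawals "
                "WHERE account_id = %s AND made_at::date = current_date",
                (acct_id,))
            (today,) = cur.fetchone()
            if amount > balance or today + amount > DAILY_LIMIT:
                conn.rollback()
                return False

            cur.execute(
                "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, acct_id))
            cur.execute(
                "INSERT INTO withdrawals (account_id, amount, made_at) "
                "VALUES (%s, %s, now())", (acct_id, amount))
        conn.commit()
        return True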
So my point is that even without serializable isolation the database can still guarantee some invariants for me, even if it can't guarantee all of them. And no matter how strong its guarantees, the database can never preserve every invariant that matters to me, so I always have to think about what I'm going to handle at the application level anyway.
In the case of my company (which makes applications for insurance companies), we do have to think about those sorts of things. But we have to think about a ton of things anyway, and the division of labor between the app tier and the database tier is always something we have to worry about. We do things like build optimistic concurrency into our ORM layer to make the most common cases easier to reason about, and we have pretty well-defined transaction lifecycles. For the most complicated cases, though, we have to think about the potential for race conditions in the database, just as we think about them at the application level, and then decide how to handle them. Again, even a "true" ACID database wouldn't save us from that work, because many of the invariants we want to preserve aren't expressible in the database anyway.
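For what it's worth, the version-column flavor of optimistic concurrency is easy to sketch (this is the generic pattern, not our actual ORM internals; psycopg2/Postgres with a hypothetical "version" column):

    # Generic version-column optimistic concurrency, not any specific ORM.
    # Assumes an integer "version" column on accounts (illustrative schema).
    import psycopg2

    def save_balance(conn, acct_id, new_balance, version_read):
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE accounts "
                "SET balance = %s, version = version + 1 "
                "WHERE id = %s AND version = %s",
                (new_balance, acct_id, version_read))
            if cur.rowcount == 0:
                # Someone committed first: report a conflict rather than
                # silently overwriting their write.
                conn.rollback()
                raise RuntimeError("stale write: reload and retry")
        conn.commit()

The appeal of the pattern is that the common, conflict-free case costs nothing beyond one extra column, and conflicts surface as explicit retries instead of lost updates.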
[1] e.g., http://www.bailis.org/blog/when-is-acid-acid-rarely/#arbitra...
[2] "Granularity of Locks and Degrees of Consistency in a Shared Data Base," Jim Gray et al., 1976 http://diaswww.epfl.ch/courses/adms07/papers/GrayLocks.pdf
[3] e.g., http://www.bailis.org/blog/when-is-acid-acid-rarely/#weak-no...