That's a good question. I suppose that posting the commit makes it incredibly obvious how to exploit the issue, so maybe they wanted to wait a little bit longer for their on-prem users who were slow to patch?
If this is a result of that vulnerability, Ubisoft only have themselves to blame. Our support contacts ensured that we knew about the vulnerability as early as possible and gave us a clear guide to remediation for our self-hosted clusters. Our Atlas clusters were automatically patched before this was announced publicly. You'd have to be running your database open to the internet (already a mistake), ignore the advice to simply turn off zlib, and ignore the fixed versions that have been available for over a week.
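For anyone self-hosting and wondering what "turn off zlib" means in practice, the relevant knob is the list of accepted wire compressors in mongod.conf (a sketch, assuming the vulnerability is in the zlib network compressor; restart mongod after changing it):

    # mongod.conf: stop accepting zlib on the wire,
    # leaving snappy and zstd available
    net:
      compression:
        compressors: snappy,zstd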
If you're going to be in the business of running your own critical infrastructure, you better have spent a lot of effort planning for these situations, because they are inevitable. Otherwise, it's easier to just pay a vendor to do it for you.
Postgres violated serializability on a single node for a considerable amount of time [1] and used fsync incorrectly for 20 years [2]. I personally witnessed lost data on Postgres because of the fsync issue.
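To illustrate the fsync issue with a sketch (my illustration, not actual Postgres code): the long-standing assumption was that a failed fsync could simply be retried later, but on Linux the kernel may drop the dirty pages and clear the error state, so the retry can report success without the data ever reaching disk.

    import os

    def flush_segment(fd):
        # Hypothetical checkpoint-style flush showing the misconception.
        try:
            os.fsync(fd)        # success: writes assumed durable
        except OSError:
            # WRONG assumption: a retry persists the same dirty pages.
            # The kernel may already have discarded them and cleared the
            # error, so this second fsync can return cleanly while the
            # writes are silently lost.
            os.fsync(fd)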
Database engineering is very hard. MongoDB has had both poor defaults and bugs in the past. It will certainly have durability bugs in the future, just like Postgres and every other serious database. I'm not sure that Postgres' durability stacks up especially well against modern MongoDB.
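To be concrete about what opting into strong durability looks like in modern MongoDB (a PyMongo sketch; the hostnames and database name are made up, and since 5.0 w="majority" is the default write concern anyway):

    from pymongo import MongoClient, WriteConcern
    from pymongo.read_concern import ReadConcern

    # Hypothetical replica-set connection string
    client = MongoClient("mongodb://db1,db2,db3/?replicaSet=rs0")

    # Acknowledge writes only after a majority of nodes have journaled them,
    # and read only majority-committed data
    orders = client.get_database(
        "shop",
        write_concern=WriteConcern(w="majority", j=True),
        read_concern=ReadConcern("majority"),
    ).orders

    orders.insert_one({"sku": "abc-123", "qty": 1})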
I don't remember the numbers (90% is probably a bit exaggerated), but our savings from moving from Atlas to MongoDB Community on EC2 several years ago were big.
In addition to direct costs, Atlas also had expensive limitations. For example, we often spin up clone databases from a snapshot; these have lower performance requirements and no durability requirements, so a smaller non-replicated server suffices, but Atlas required them to be sized like the replicated, high-performance production cluster.
Was it? Assuming an M40 cluster consists of 3 m6g.xlarge machines, that's $0.46/hr on-demand compared to Atlas's $1.04/hr for the compute. Savings plans or reserved instances reduce that cost further.
Highly doubt that. MongoDB has 5000 well-paid employees and is not a big loss-making enterprise. If most of the cost were passed through to AWS, they wouldn't be able to do that. Their quarterly revenue is $500M+, but they also spend $200M on sales and marketing and $180M on R&D. (All based on their filings.)
Yes, and my point is that if this customer switched to running their own MongoDB instances on EC2, as Atlas does, it would reduce the bill by less than 50%, because the rates Atlas charges mean its cut is less than what AWS is getting from this customer.
Maybe it's only me, but I just don't write that much code. I try to change less than 100ish lines per day. I try to keep codebases small. I don't want to run a codebase with hundreds of thousands of lines of code in a production environment.
Theoretically you could write your own client library, but this is nontrivial: this is a distributed database, and the client library talks to multiple servers. It's not a "connect to a socket and send commands" type of client library, as it is with SQL servers.
The hard part is that there is no client spec you can follow as a third party. Everything is implementation-defined. If you're out-of-tree, your code can break at any time. If the FoundationDB project committed to a protocol, client authors could write libraries outside the main project.
Can you, though? The protocol is not very well documented, and it seems to change rapidly along with the server version it aims to be compatible with.
This seems to be a theoretical discussion: I don't think I'd ever want to implement the client part of FoundationDB myself, and I don't really see a good reason to.
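For context, this is what the official Python bindings look like in practice; cluster discovery, retries, and transaction coordination all live inside the client library, which is exactly the part you'd otherwise have to reimplement (a minimal sketch; the API version and keys are illustrative):

    import fdb

    fdb.api_version(710)        # must be called before opening the database
    db = fdb.open()             # uses the default cluster file

    @fdb.transactional
    def set_user(tr, user_id, name):
        # The decorator starts a transaction, retries on transient
        # failures and conflicts, and commits on return.
        tr[b"user/" + user_id] = name

    set_user(db, b"42", b"Ada")
    print(db[b"user/42"])       # reads also go through an implicit transaction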
MongoDB is a very good database, and these days, at scale, I am significantly more confident in its correctness guarantees than in any of the half-baked Postgres horizontal scaling solutions. I have run both databases at a seven-figure-a-month spend, and I would not choose off-the-shelf Postgres for this task again.