
That's a good question. I suppose that posting the commit makes it incredibly obvious how to exploit the issue, so maybe they wanted to wait a little bit longer for their on-prem users who were slow to patch?


Posting the CVE and then the patch is the reverse of this.


By "patch" I am talking about the public commit. Updated binaries were made available when the CVE was published.


That's not what the blog post implies, given that they only told people how to update afterwards.


The author of this post is incorrect about the timeline. Our Atlas clusters were upgraded days before the CVE was announced.


thanks! updated


If this is a result of that vulnerability, Ubisoft only have themselves to blame. Our support contacts ensured that we knew about the vulnerability as early as possible and gave us a clear guide to remediation for our self-hosted clusters. Our Atlas clusters were automatically patched before this was announced publicly. You'd have to run your database open to the internet (already a mistake), ignore the advice to simply turn off zlib (see the config sketch below), and ignore the fixed versions that have been available for over a week.

If you're going to be in the business of running your own critical infrastructure, you better have spent a lot of effort planning for these situations, because they are inevitable. Otherwise, it's easier to just pay a vendor to do it for you.
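
For context, the zlib remediation mentioned above amounts to dropping zlib from the wire compressors the server will negotiate. A sketch of the mongod.conf change (the exact guidance came from MongoDB support; keeping snappy and zstd enabled here is my assumption):

    # mongod.conf (sketch): drop zlib from the compressors the server
    # will negotiate; the server default is "snappy,zstd,zlib".
    net:
      compression:
        compressors: snappy,zstd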


We received communication that all Atlas clusters were upgraded with the fix before the vulnerability was announced.


Postgres violated serializability on a single node for a considerable amount of time [1] and used fsync incorrectly for 20 years [2] (sketched below). I personally witnessed lost data on Postgres because of the fsync issue.

Database engineering is very hard. MongoDB has had both poor defaults as well as bugs in the past. It will certainly have durability bugs in the future, just like Postgres and all other serious databases. I'm not sure that Postgres' durability stacks up especially well with modern MongoDB.

[1] https://jepsen.io/analyses/postgresql-12.3

[2] https://archive.fosdem.org/2019/schedule/event/postgresql_fs...
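
To make [2] concrete, the unsafe pattern looked roughly like this (a minimal sketch, not actual Postgres source):

    /* Sketch of the pattern "fsyncgate" [2] showed to be unsafe: on
     * Linux, a failed fsync() can mark the dirty pages clean and clear
     * the error state, so a retry may return 0 even though the data
     * never reached disk. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    void flush_or_die(int fd)
    {
        if (fsync(fd) == 0)
            return;             /* first attempt succeeded */

        /* UNSAFE: the kernel may already have dropped the dirty pages,
         * so a second fsync() can "succeed" without writing anything. */
        if (fsync(fd) == 0)
            return;             /* looks durable, may not be */

        perror("fsync");
        exit(EXIT_FAILURE);     /* post-fsyncgate Postgres instead PANICs
                                   on the first failure and replays the
                                   WAL rather than retrying */
    }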


Thanks for adding that - I wasn't aware.


Most of the cost in their bill wasn't from MongoDB; it was cost passed on from AWS.


I don't remember the numbers (90% is probably a bit exaggerated), but our savings from moving from Atlas to MongoDB Community on EC2 several years ago were big.

In addition to direct costs, Atlas also had expensive limitations. For example, we often spin up clone databases from a snapshot; these have lower performance and no durability requirements, so a smaller non-replicated server suffices, but Atlas required them to be sized like the replicated high-performance production cluster.


Was it? Assuming an M40 cluster consists of 3 m6g.xlarge machines, that's $0.46/hr on-demand compared to Atlas's $1.04/hr for the compute. Savings plans or reserved instances reduce that cost further.
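
Spelling that arithmetic out (assuming us-east-1 on-demand pricing of about $0.154/hr per m6g.xlarge; rates vary by region):

    EC2 (self-managed): 3 x $0.154/hr ≈ $0.46/hr
    Atlas (M40):        $1.04/hr for the same compute, i.e. roughly a 2.2x markup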


There's definitely MongoDB markup, but a full 33% of their bill was AWS networking costs that have nothing to do with Atlas.


I highly doubt that. MongoDB has 5,000 well-paid employees and is not a big loss-making enterprise. If most of the cost were pass-through to AWS, they'd not be able to do that. Their quarterly revenue is $500M+, but they also spend $200M on sales and marketing and $180M on R&D. (All based on their filings.)


You can look at this particular bill and observe that more than 50% of the cost was going to AWS.


If they’re a reseller of AWS, which they will be, they decide the rates that get charged.


Yes, and my point is that if this customer switched to running their own MongoDB instances on EC2 the way Atlas does, the bill would drop by less than 50%: at the rates Atlas charges, its cut is less than what AWS is getting from this customer.


I can assure you that Stripe does not regret the decision.


Interesting; a lot of ex-Stripe engineers I talk to say the opposite, but agree that they made it work.


Maybe it's only me, but I just don't write that much code. I try to change less than 100ish lines per day. I try to keep codebases small. I don't want to run a codebase with hundreds of thousands of lines of code in a production environment.


FoundationDB is very cool, but I wish it didn't require linking in their C library to talk to it. The client story is not good.
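
For anyone who hasn't used it: even a single read goes through libfdb_c's network thread and futures. A rough sketch against the documented C API (the API version and the elided error handling are illustrative; real code must check every fdb_error_t):

    /* Rough sketch of one read via libfdb_c. */
    #define FDB_API_VERSION 710
    #include <foundationdb/fdb_c.h>
    #include <pthread.h>
    #include <stdio.h>

    static void *net_thread(void *arg)
    {
        fdb_run_network();      /* blocks until fdb_stop_network() */
        return NULL;
    }

    int main(void)
    {
        fdb_select_api_version(FDB_API_VERSION);
        fdb_setup_network();
        pthread_t net;
        pthread_create(&net, NULL, net_thread, NULL); /* client runs its own network thread */

        FDBDatabase *db;
        fdb_create_database(NULL, &db);               /* NULL = default cluster file */

        FDBTransaction *tr;
        fdb_database_create_transaction(db, &tr);
        FDBFuture *f = fdb_transaction_get(tr, (const uint8_t *)"hello", 5, 0);
        fdb_future_block_until_ready(f);              /* every operation is a future */

        fdb_bool_t present; const uint8_t *val; int len;
        if (fdb_future_get_value(f, &present, &val, &len) == 0 && present)
            printf("%.*s\n", len, val);

        fdb_future_destroy(f);
        fdb_transaction_destroy(tr);
        fdb_database_destroy(db);
        fdb_stop_network();
        pthread_join(net, NULL);
        return 0;
    }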


Theoretically you could write your own client library, but this is nontrivial: this is a distributed database, and the client library talks to multiple servers. It's not a "connect to a socket and send commands" type of client library, as with SQL servers.


The hard part is that there is no client spec you can follow as a third-party. Everything is implementation-defined. If you're out-of-tree, your code can break at any time. If the FoundationDB project committed to a protocol, client authors could write libraries outside of the main project.


Can you, though? The protocol is not very well documented, and it seems to change rapidly in step with the server version it aims to be compatible with.


You might be able to, but are definitely not supposed to. The client is conceptually "part of the cluster".


Are you planning to?

This seems to be a theoretical discussion: I don't think I'd ever want to implement the client part of FoundationDB myself, and I don't really see a good reason to.


I'm not planning to because there is no documented protocol. If there were, I might! As a result, I can't use FoundationDB.


MongoDB is a very good database, and these days at scale I am significantly more confident in its correctness guarantees than in any of the half-baked Postgres horizontal scaling solutions. I have run both databases at seven-figure-a-month spend, and I would not choose off-the-shelf Postgres for this task again.

