Open source is unlikely, because such a thing is expensive to develop and there is no business model to support it (either commercially or as a story for VCs).

At Quobyte we're building a scalable fault-tolerant file system (POSIX, so it has a different architecture and constraints), but it's closed source.



And that's part of what others are saying. A certain class of developers wants POSIX, but POSIX is _terrible_ to scale horizontally.

I've tried a couple times at different companies to get developers to stop assuming POSIX and switch to object storage as their persistent byte abstraction layer.

Trying to scale POSIX is a dead end. Every time I try it, or see people build it, it's a big pile of suck. Take CephFS: yeah, it works, but it still sucks compared to object storage. There's just no magic in the universe that will get you around the CAP issues inherent in making POSIX work over a network.
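
To make "object storage as the persistent byte abstraction" concrete, here is a minimal in-memory sketch of the interface being argued for (class and method names are illustrative, not any particular vendor's API): a flat key space, whole-object put/get, atomicity only per key, and none of the POSIX operations (rename, append, partial overwrite) that force cross-node coordination.

    from typing import Iterator

    class ObjectStore:
        """In-memory stand-in for an object store (illustrative only)."""

        def __init__(self) -> None:
            self._objects: dict[str, bytes] = {}

        def put(self, key: str, data: bytes) -> None:
            # Atomically replaces the whole object; no seek, no partial write.
            self._objects[key] = data

        def get(self, key: str) -> bytes:
            return self._objects[key]

        def delete(self, key: str) -> None:
            self._objects.pop(key, None)

        def list(self, prefix: str) -> Iterator[str]:
            # Prefix listing instead of directories; note there is no rename.
            return (k for k in self._objects if k.startswith(prefix))

    store = ObjectStore()
    store.put("logs/2024/01/app.log", b"...")
    print(list(store.list("logs/")))  # ['logs/2024/01/app.log']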


Only the namespace is hard to scale horizontally, because of rename; the rest of POSIX does not have inherent scalability issues. For the vast majority of applications, namespace scalability is not an issue (only a few individual applications store billions of files/objects), so in most cases it's a system management topic.
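
To see why rename is the culprit, here is a toy sketch (all names hypothetical) of a namespace partitioned across metadata shards: a rename within one shard is a cheap local operation, but a rename whose source and destination directories land on different shards must commit atomically on both, which in a real system means a distributed transaction, and that coordination is what limits horizontal namespace scaling.

    import threading

    class MetadataShard:
        """One partition of the namespace: a map from path to inode id."""
        def __init__(self) -> None:
            self.entries: dict[str, int] = {}
            self.lock = threading.Lock()

    NUM_SHARDS = 4
    shards = [MetadataShard() for _ in range(NUM_SHARDS)]

    def shard_for(path: str) -> MetadataShard:
        # Partition by parent directory so lookups within a directory stay local.
        parent = path.rsplit("/", 1)[0]
        return shards[hash(parent) % NUM_SHARDS]

    def rename(src: str, dst: str) -> None:
        s, d = shard_for(src), shard_for(dst)
        if s is d:
            # Same shard: one local critical section, scales fine.
            with s.lock:
                s.entries[dst] = s.entries.pop(src)
            return
        # Cross-shard: both shards must change together (locked in a fixed
        # order here to avoid deadlock). In a real distributed file system
        # this becomes a two-phase commit across metadata servers.
        first, second = sorted((s, d), key=id)
        with first.lock, second.lock:
            d.entries[dst] = s.entries.pop(src)

    shard_for("/a/x").entries["/a/x"] = 42
    rename("/a/x", "/b/y")
    print(shard_for("/b/y").entries)  # {'/b/y': 42}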

If you go full object storage semantics (no in-place updates, no appends, no rename, ...), you're pushing the problem to the application layer. The experience at Google was that that's not a good place to have it, because application developers are usually not good at solving distributed systems problems. That lesson is part of the journey from Bigtable (no transactions) to Megastore (application-layer transactions on top of Bigtable) to Spanner (transactions built in).
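
As a concrete sketch of what "pushing the problem to the application layer" means (the store below is a hypothetical in-memory stand-in, not any real object store's API): with only whole-object put/get plus a conditional write, every read-modify-write turns into a hand-rolled optimistic-concurrency retry loop that each application team has to get right, failure modes and all, on its own.

    import json

    class VersionedStore:
        """Toy object store: get() returns (data, version); a put succeeds
        only if the caller's version still matches (optimistic concurrency)."""
        def __init__(self) -> None:
            self.objects: dict[str, tuple[bytes, int]] = {}

        def get(self, key: str) -> tuple[bytes, int]:
            return self.objects.get(key, (b"{}", 0))

        def put_if_version(self, key: str, data: bytes, expected: int) -> bool:
            _, current = self.objects.get(key, (b"{}", 0))
            if current != expected:
                return False  # someone else wrote in between; caller retries
            self.objects[key] = (data, current + 1)
            return True

    def increment_counter(store: VersionedStore, key: str, field: str) -> None:
        # The application-layer "transaction": read, modify, conditional
        # write, retry on conflict; logic that a transactional storage
        # layer would otherwise provide once, for everyone.
        while True:
            data, version = store.get(key)
            doc = json.loads(data)
            doc[field] = doc.get(field, 0) + 1
            if store.put_if_version(key, json.dumps(doc).encode(), version):
                return

    store = VersionedStore()
    increment_counter(store, "stats", "requests")
    print(store.get("stats")[0])  # b'{"requests": 1}'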

So yes, of course there are inherent hard trade-offs, but dropping strong semantics from the storage layer by using an object store instead of a file system is usually not a good idea.



