
I would think the opposite. Presumably there's zero contention so no need to worry about concurrency control like locking. You're just paging linear blocks of memory to disk and the bookkeeping remains in local host memory, so all you're doing is sending and receiving fixed, independent blocks of memory that are opaque to the remote host. And prior to v4 NFS was UDP and stateless, so the code to do this from the kernel could be kept quite simple.

The functional equivalent to this is iSCSI, which as far as I understand is similar to if not literally what AWS uses under the covers for EC2 block devices.
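As a concrete sketch of the idea: on Linux you could attach a remote iSCSI LUN with open-iscsi and then page to it directly, so the remote host only ever sees opaque block reads and writes. The portal address, target IQN, and device name below are illustrative, not from any real deployment.

```shell
# Discover and log in to a (hypothetical) iSCSI target; requires open-iscsi.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2004-04.com.example:storage.lun1 -p 192.0.2.10 --login

# The LUN now appears as a local block device (e.g. /dev/sdb; check dmesg).
# Paging to it needs no remote-side coordination -- it is just block I/O:
mkswap /dev/sdb
swapon /dev/sdb
```

All bookkeeping (which pages live where) stays in the local kernel; the remote end serves fixed-size blocks with no notion of what they contain.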




> The functional equivalent to this is iSCSI, which as far as I understand is similar to if not literally what AWS uses under the covers for EC2 block devices.

Back when disk-SSD hybrid storage was still a novel idea (~2006), one of the guys at Sun had an interesting demo with ZFS.

He first created a bunch of storage pools on a server on the east coast from iSCSI volumes on the west coast, installed Postgres, and ran a benchmark. He then created a pool from the same iSCSI volumes but added a local SSD as a separate ZIL/slog device (for write caching), and ran the same benchmark.

The SSD-enabled pool came quite close to local-disk performance, despite the data living on a cross-country SAN.
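The demo described above roughly corresponds to the following zpool commands. The device names are illustrative (assume the remote iSCSI LUNs appear as /dev/sdb-/dev/sdd and the local SSD as /dev/sde); this is a sketch, not the exact commands from the demo.

```shell
# Pool backed entirely by remote iSCSI LUNs -- every synchronous write
# must round-trip across the country before it is acknowledged.
zpool create tank /dev/sdb /dev/sdc /dev/sdd

# Add the local SSD as a separate intent-log (slog) device: synchronous
# writes now commit to the nearby SSD and are flushed to the remote
# disks asynchronously.
zpool add tank log /dev/sde

zpool status tank
```

Because ZFS only needs the ZIL to survive a crash between commit and flush, a small local SSD is enough to hide most of the WAN latency from Postgres's fsync-heavy workload.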





