
Correct—I reached out to Adam to ask if Ars could syndicate the piece, and then I did some minor cleanup and clarification editing on it (mostly style conformance, but also some minor grammar tweaks and a few sentence re-writes). After checking with him to make sure my changes didn't change anything substantive, we ran the piece this AM.

Link wherever you'd like, of course, but the more traffic this pulls in, the more ammo I have to be able to get Adam contributing to Ars as a regular freelancer!

(edit - hi, adam!)

(edit^2 - corrections corrected. Apologies for the errors. I am just a simple caveman. Your mathematics confuse and annoy me!)



If you want to publish more ZFS articles, I'd love to read about the following topics from Adam and/or members of OpenZFS:

- using DTrace for ZFS operations insight

- state of OpenZFS, comparing illumos, FreeBSD, NetBSD and Linux

- myths and/or often-cited problems (ECC, COW being unfit for DB loads, VM images, etc.)

- ZFS version and feature flags wrt portability between illumos and FreeBSD versions

- pool flexibility work that will make it easier to remove devices

- comparison to btrfs, hammer{1,2} and flash filesystems

- the topic of ZFS block pointer rewrite

- garbage collection and safely removing traces of a file/directory in a COW fs

- rebalancing story compared to HAMMER and future work in this space

- built-in ZFS encryption (independent of system crypto volume support)

I should say that I'd only support an article like that if Ars allows parts of the written text to be incorporated into the OpenZFS wiki/documentation.


Some of these are probably a little too deep to get much traction on the Ars front page, but there are some solid ideas here (especially the oft-cited problems one). Thanks for the feedback!

We did run a big piece by Jim Salter a couple of years ago on next-gen file systems that focused on ZFS and btrfs (http://arstechnica.com/information-technology/2014/01/bitrot...), but yeah, I'd love to have more filesystem-level stuff showing up. The response is generally very, very strong—turns out people really like reading about file systems when the authors know what they're talking about!

edit -

> I should say that I'd only support an article like that if Ars allows parts of the written text to be incorporated into the OpenZFS wiki/documentation.

That's more complicated, unfortunately. I am not a lawyer etc etc and I am only speaking generally here, but Ars and CN own the copyright on the pieces we run (though syndications like Adam's piece today are different), and wholesale reuse of the text without remuneration isn't something that the CN rights management people like. Fair use is obviously fine, so quoting portions of pieces as sources in documentation is not a problem, but re-using most or all of something isn't (necessarily or usually) fair use.

(again, not a lawyer, my words aren't gospel, don't take my word for it, etc etc)


I'm also not a lawyer, but my thought process is like this: in the open-source spirit, given that this is not a book to be profited from (and profiting from technical books is very hard anyway), developers of some software could contribute technical content, get editor time instead of compensation, and in return be allowed to include that content in the project's documentation. Real World Haskell and Real World OCaml somehow managed to convince the publisher this is fine. Again, IANAL, just thinking out loud.


Allan Jude might be a viable candidate for some of the content, if he'd be interested.

Also, Dr. McKusick documented some of the internals in his living FreeBSD kernel book, but as you said, this might be beyond Ars's scope.

That said, I've seen some deep technical content on Ars, so why not give it a try?


There are still some technical errors: calling 1 TB 2^30 bytes, for example. (It's 2^40 bytes.)


Good catch; thanks.


And it's 1024^4, not 1024^3.
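
For anyone who wants to double-check the arithmetic, here's a quick Python sanity check (my own illustration, not from the article):

    # Binary vs. decimal reading of "1 TB"
    tib = 1024 ** 4      # 2**40 = 1,099,511,627,776 bytes
    tb = 10 ** 12        # SI terabyte = 1,000,000,000,000 bytes
    assert tib == 2 ** 40
    print(tib - tb)      # the two readings differ by ~99.5 billion bytes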


I used to think that (1024 vs 1000), but not any more. There's ancient precedent for the other interpretation.

For example, when I was a kid growing up in NY, one of the local radio stations I listened to was WPLJ, 95.5 MHz. That's 95,500,000 cycles per second, not 100,139,008.

Go back nearly 100 years: the Chicago area got a radio station called WLS[1], one of the original clear-channel stations. It broadcasts at 870 kHz. That's 870,000 cycles per second, not 890,880.

Much as computer people would like "kilo", "mega", "giga" etc to mean 1024^(whatever), there's a lot of precedent for doing things the old fashioned way!
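
If anyone wonders where those odd numbers come from, here's a tiny Python check (my own illustration, just applying the binary prefixes to the broadcast frequencies):

    # SI prefixes vs. binary prefixes for the two stations above
    print(int(95.5 * 10**6), int(95.5 * 2**20))  # 95500000 vs 100139008 (WPLJ)
    print(870 * 10**3, 870 * 2**10)              # 870000 vs 890880 (WLS)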

As Wikipedia explains, tera- (from the Greek "terastios", meaning "huge, enormous") is a prefix in the SI system of units denoting 10^12, or 1,000,000,000,000.

SI is a well accepted standard. Just because it's more logical for chip designers to implement memory chips using powers of 1024 isn't a good enough reason to ignore SI.

[1] https://en.wikipedia.org/wiki/WLS_(AM)


I would hardly call it old-fashioned. SI prefixes were just misapplied to storage sizes, hence the more correct kibibyte (KiB) [0], mebibyte (MiB), etc., and Apple's somewhat recent switch to using 1 kB = 10^3 bytes.

[0] https://en.wikipedia.org/wiki/Kibibyte
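
For context, a drive sold as "1 TB" (10^12 bytes) comes out to roughly 931 GiB when counted in powers of two. A quick Python illustration (my own numbers):

    # A "1 TB" (decimal) drive expressed in binary units
    size_bytes = 10 ** 12
    print(size_bytes / 2 ** 30)  # ~931.32 GiB
    print(size_bytes / 2 ** 40)  # ~0.909 TiB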


This sentence has a plural "snapshots" where singular makes more sense: "APFS brings a much-desired file system feature: snapshots. A snapshots lets you freeze the state of a file system at a particular moment and continue to use and modify that file system while preserving the old data."

Either that, or rewrite it for the plural.


Take note that Adam explained the limitations of APFS's current snapshot implementation and how it falls short in some important regards.


That may be so, but:

    A snapshots lets you freeze the state of a file system
Doesn't really read well. "A snapshots" as a construction in English really grated as I read it, even in context.



