Author here, thanks for the heads up. It doesn't look like mkfile is included with the Jessie base system I was using to test. For those wishing to use this method, the xfsprogs package contains xfs_mkfile, which works the same way as the parent's example.
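If it helps, this is roughly what that looks like on Debian (a sketch only; the path and size are placeholders, and I'm going from memory on the xfs_mkfile syntax):

    # xfs_mkfile ships with xfsprogs on Debian
    apt-get install xfsprogs
    # create a 1 GB backing file (size takes b/k/m/g suffixes)
    xfs_mkfile 1g /var/tmp/zfs-test/disk1.img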
There is also 'truncate'. Especially nice about truncate is the ability to create sparse files. So if you want to test how big a pool would be with three 1TB disks without actually using 3TB of allocated space: 'truncate -s 1T file1.img'. Voila, a 1TB file that occupies only a few kilobytes of real disk space when you create a zpool on it.
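For example, a rough sketch of sizing a three-disk raidz out of sparse files (pool name and paths are made up; zpool wants absolute paths for file vdevs):

    # three sparse 1 TB files, consuming almost no real space yet
    truncate -s 1T /var/tmp/zfs-test/file1.img /var/tmp/zfs-test/file2.img /var/tmp/zfs-test/file3.img
    # build a throwaway pool on them to see the usable capacity
    zpool create testpool raidz /var/tmp/zfs-test/file1.img /var/tmp/zfs-test/file2.img /var/tmp/zfs-test/file3.img
    zpool list testpool
    # apparent size vs. blocks actually allocated
    du -h --apparent-size /var/tmp/zfs-test/file*.img
    du -h /var/tmp/zfs-test/file*.img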
> For filesystems which support the fallocate system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeros.
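A quick way to see the difference between preallocated and sparse (filenames are illustrative):

    # fallocate reserves blocks without writing them
    fallocate -l 1G prealloc.img
    # truncate makes a sparse file with nothing reserved
    truncate -s 1G sparse.img
    # the first column of -s shows blocks actually allocated
    ls -lhs prealloc.img sparse.img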
Sorry, I deleted my comment about the same time you replied to it. I realized both you and TFA were discussing testing setups, not actual hard drive redundancy setups.
One thing to note: the '-o ashift' option is a per-drive (vdev) attribute according to the manual. So, when replacing a vdev, be sure to specify this option again if you do not use the default:
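Something like the following, if I'm reading the man page right (pool and device names are placeholders):

    # pool originally created with 4K sectors forced
    zpool create -o ashift=12 tank mirror sda sdb
    # when swapping in a replacement disk, pass ashift again so the
    # new member doesn't fall back to whatever is autodetected
    zpool replace -o ashift=12 tank sda sdc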
Another nice property of ZFS mirroring is the read performance. Since data integrity is verified by checksums rather than by reading the data back from all disks and comparing it, the disks within a ZFS mirror are able to serve read requests in parallel.
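You can watch this happen under a read-heavy workload with something like (pool name is a placeholder):

    # per-device columns should show read ops landing on both mirror members
    zpool iostat -v tank 5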
Also, standard Linux RAID (mdadm) allows 3 or more disks in RAID 1 and will read from all of them for better performance. ZFS has a lot of overhead compared to mdadm+LVM, but it depends on your specific use case.
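For reference, a three-way mdadm mirror is just (device names are placeholders):

    # RAID 1 with three members, each holding a full copy
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    cat /proc/mdstat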