There is no 'modern' ZFS-like fs in Linux nowadays.
I spent some time researching this topic, and in all the benchmarks I've seen, as well as in my own tests, btrfs is faster or much faster: https://www.reddit.com/r/zfs/comments/1i3yjpt/very_poor_perf...
"Note that increasing iodepth beyond 1 will not affect synchronous ioengines"[1]
Is there a reason you used that ioengine as opposed to, for example, "libaio" with a "--direct=1" flag?
[1] https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-...
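To make the question concrete, here is a rough sketch of the kind of invocation I have in mind; the target path, block size, and iodepth are placeholders and would need to match whatever the original benchmark was doing:

    # Async direct-I/O random-write test; with libaio plus --direct=1,
    # iodepth actually controls how many I/Os are kept in flight.
    fio --name=randwrite-test \
        --filename=/mnt/pool/fio-testfile \
        --ioengine=libaio \
        --direct=1 \
        --rw=randwrite \
        --bs=4k \
        --size=1G \
        --iodepth=32 \
        --runtime=60 \
        --time_based

With a synchronous ioengine the iodepth setting is effectively ignored, which is why the results can look very different from an async, direct-I/O run.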
A ZFS pool will remain available even in degraded mode, and correct me if I'm wrong, but with BTRFS you mount the array through one of the member devices rather than through the array itself, so if that specific device goes down, the filesystem stays unmounted until you remount it via another available member device, which isn't great for availability.

I thought about mitigating that by making an mdadm RAID1 formatted with BTRFS and mounting the virtual device instead, but then you lose the ability to prevent bit rot, since BTRFS loses that visibility when it doesn't manage the array natively.
I don't think btrfs has a concept of having only some subvolumes usable. Either you can mount the filesystem or you can't. What may have confused you is that you can mount a btrfs filesystem by referring to any individual block device that it uses, and the kernel will track down the others. But if the one device you have listed in /etc/fstab goes missing, you won't be able to mount the filesystem without fixing that issue. You can prevent the issue in the first place by identifying the filesystem by UUID instead of by an individual block device.
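As a sketch of what I mean (the UUID below is made up; substitute the value blkid reports for any member device):

    # /etc/fstab: identify the btrfs filesystem by UUID instead of one member device
    UUID=0f1e2d3c-4b5a-6978-8897-a6b5c4d3e2f1  /mnt/pool  btrfs  defaults  0  0

    # If a member device really is missing, btrfs still refuses a normal mount;
    # you have to opt in to a degraded mount explicitly:
    mount -o degraded UUID=0f1e2d3c-4b5a-6978-8897-a6b5c4d3e2f1 /mnt/pool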
AFAIU, btrfs effectively absolves itself of responsibility in these cases, claiming the issue is buggy drive firmware.