65 points by qvr | 2 comments
miffe ◴[] No.44652742[source]
What makes this different from regular md? I'm not familiar with unRAID.
replies(2): >>44652869 #>>44653098 #
wongarsu ◴[] No.44653098[source]
md takes multiple partitions and makes one virtual device you put a single file system on, with striping and the traditional RAID levels.

unRaid takes multiple partitions, dedicates one or two of them to parity, and passes the other partitions through. You can handle those normally, putting different file systems on different partitions in the same array and treating them as completely separate file systems that happen to be protected by the same parity drives.

This lets you easily mix drives of different sizes (as long as the parity drives are at least as large as the largest data partition) and add, remove or upgrade drives with relative ease. It also means that every read only touches one drive, and every write touches that drive plus the parity drives. Depending on how you organize your files you can have drives that are basically never spun up, while in an md array every drive is involved in basically every read or write.
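
To make that concrete, here's a minimal Python sketch of the single-parity idea (made-up drive contents, byte-wise XOR standing in for what the real implementation does per sector): parity is just the XOR across the data drives, and any one lost drive can be rebuilt from the parity plus the survivors.

    from functools import reduce

    def xor_blocks(blocks):
        # byte-wise XOR across equally sized blocks
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    # three independent data "drives"; in unRaid each carries its own file system
    drives = [bytes([1, 2, 3, 4]), bytes([9, 8, 7, 6]), bytes([5, 5, 5, 5])]

    # the parity drive is the XOR of all data drives
    parity = xor_blocks(drives)

    # a read touches only the drive holding the file;
    # a write touches that drive plus the parity drive

    # if drive 1 dies, parity XOR the surviving drives reconstructs it
    rebuilt = xor_blocks([parity] + [d for i, d in enumerate(drives) if i != 1])
    assert rebuilt == drives[1]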

The disadvantages are that you lose out on the performance advantages of a RAID, and that the array only really protects against losing entire disks. You can't easily repair single blocks the way a zfs RAID could. Also you have a number of file systems you have to balance (which unRaid helps you with, but I don't know how much of that is in this module).

replies(1): >>44653177 #
phoronixrly ◴[] No.44653177[source]
Not sure what you mean by 'easily repair single blocks the way a zfs RAID could', but the physical devices themselves often handle bad blocks, and md adds one safety layer on top of this: bad-block tracking. No relocation in md though, AFAIK.
replies(2): >>44653209 #>>44653306 #
wongarsu ◴[] No.44653306[source]
What I mean is that unRaid, zfs and md all allow you to run a scrub over your RAID to check for bit rot. That can happen for all kinds of reasons, including cosmic rays just flipping bits on the drive platter. The issue is that unRaid and md can't do much if they detect a block/stripe where the parity doesn't match the data, because they don't know which of the drives suffered the bit flip. Zfs, on the other hand, can repair the data in that scenario because it keeps per-block checksums.
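
A toy illustration of the difference (Python, hypothetical blocks, with CRC32 standing in for zfs's real checksums): with parity alone a scrub sees that the stripe is inconsistent but not which drive rotted; with per-block checksums the bad block identifies itself.

    import zlib
    from functools import reduce

    def parity_of(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    blocks = [b"aaaa", b"bbbb", b"cccc"]
    parity = parity_of(blocks)
    checksums = [zlib.crc32(b) for b in blocks]  # the extra metadata zfs keeps

    blocks[1] = b"bXbb"  # simulate bit rot on drive 1

    # parity-only scrub (md/unRaid): mismatch detected, culprit unknown
    print("stripe inconsistent:", parity_of(blocks) != parity)  # True, but which drive?

    # checksum scrub (zfs-style): the rotted block is pinpointed
    print("bad blocks:", [i for i, b in enumerate(blocks) if zlib.crc32(b) != checksums[i]])  # [1]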

Now a fairly common scenario is to use unRaid with zfs as the file system on each partition, giving you several independent zfs file systems. In that case the information needed to repair blocks exists in theory: a zfs scrub will tell you which blocks are bad, and you could repair those from parity; an unRaid parity check will do the same for the parity drives. But there is no mechanism to repair single blocks. You either have to dig in and do it yourself or just resilver the whole disk.
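
That repair step would conceptually be just one XOR per bad block. A rough sketch (Python, drives modelled as in-memory bytearrays, single parity assumed, and assuming the checksumming file system has already reported which drive and offset are bad):

    BLOCK = 4  # toy block size

    def repair_block(drives, parity, bad_drive, offset):
        # rebuild one block of the bad drive from parity + the other drives
        start, end = offset, offset + BLOCK
        good = bytearray(parity[start:end])
        for i, d in enumerate(drives):
            if i != bad_drive:
                good = bytearray(a ^ b for a, b in zip(good, d[start:end]))
        drives[bad_drive][start:end] = good

    drives = [bytearray(b"AAAAAAAA"), bytearray(b"BBBBBBBB"), bytearray(b"CCCCCCCC")]
    parity = bytearray(a ^ b ^ c for a, b, c in zip(*drives))

    drives[2][4:8] = b"ROT!"                    # bit rot in drive 2, second block
    repair_block(drives, parity, bad_drive=2, offset=4)
    assert drives[2] == bytearray(b"CCCCCCCC")  # repaired from parity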

replies(1): >>44658049 #
1. reginald78 ◴[] No.44658049[source]
The silly part is that unRaid has all the pieces to do this. The btrfs file system, which unRaid supports for array disks, can identify bitrot, and the unRaid array supports virtualizing missing disks by essentially reconstructing the disk from parity and all of the other disks. Combining those two would allow rebuilding a rotted file with features that are already present.

My impression is that the unRaid developers have somewhat neglected enhancing the core feature of their product. They seem to have put a lot of effort into ZFS support, which isn't that easy to integrate since it isn't part of the kernel, when ZFS isn't really the core draw of their product in the first place.

replies(1): >>44668125 #
2. bayindirh ◴[] No.44668125[source]
I have test-driven BTRFS in the past.

It's too metadata-heavy and only really shines on high-IOPS SSDs; it's a no-go for spinning drives, especially external ones.

RAID5/6 is still not production ready [0], and having a non-production-ready feature that isn't gated behind an "I know what I'm doing" switch is dangerous. I believe BTRFS' customers are not small fish but enterprises that protect their data in other ways.

So, I think unraid does the right thing by not doubling down on something half-baked. ZFS is battle tested at this point.

I'm personally building a small NAS for myself and researching the software stack to use. I can't trust BTRFS with my data, especially in the RAID5/6 configuration I'm planning to run.

[0]: https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid5...