65 points qvr | 11 comments
miffe ◴[] No.44652742[source]
What makes this different from regular md? I'm not familiar with unRAID.
replies(2): >>44652869 #>>44653098 #
wongarsu ◴[] No.44653098[source]
md takes multiple partitions and combines them into a single virtual device you put one file system on, with striping and the traditional RAID levels.

unRAID takes multiple partitions, dedicates one or two of them to parity, and passes the other partitions through. You can handle those normally, putting different file systems on different partitions in the same array and treating them as completely separate file systems that happen to be protected by the same parity drives.

This lets you easily mix drives of different sizes (as long as the parity drives are at least as large as the largest data partition) and add, remove or upgrade drives with relative ease. It also means that every read operation goes to only one drive, and every write goes to that drive plus the parity drives. Depending on how you organize your files you can have drives that are almost never spun up, whereas in an md array every drive is involved in essentially every read and write.

The disadvantages are that you lose the performance benefits of striped RAID, and that the parity only really protects against losing entire disks: you can't easily repair single blocks the way a ZFS RAID can. You also have a number of separate file systems to balance (which unRAID helps you with, but I don't know how much of that is in this module).
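
To make the scheme concrete, here is a toy Python sketch (not actual unRAID code) of single-parity protection over independent disks of different sizes: the parity disk holds the XOR of the data disks, a read touches only one disk, and a failed disk can be rebuilt from parity plus the survivors.

    # Simplified sketch of unRAID-style single parity over independent disks.
    # Each "disk" is just a list of byte blocks; smaller disks are treated as
    # zero-padded up to the size of the largest data disk.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    BLOCK = 4  # toy block size

    disks = [
        [b"AAAA", b"BBBB"],           # disk 0 (small)
        [b"CCCC", b"DDDD", b"EEEE"],  # disk 1 (largest)
        [b"FFFF"],                    # disk 2 (smallest)
    ]
    depth = max(len(d) for d in disks)
    padded = [d + [bytes(BLOCK)] * (depth - len(d)) for d in disks]

    # The parity disk must be at least as large as the largest data disk.
    parity = [xor_blocks([d[i] for d in padded]) for i in range(depth)]

    # A read touches only one disk; a write touches that disk plus parity.
    print(disks[1][2])  # read block 2 of disk 1 -> b'EEEE'

    # If disk 1 dies, each of its blocks can be rebuilt from parity + the others.
    rebuilt = [xor_blocks([parity[i]] + [d[i] for j, d in enumerate(padded) if j != 1])
               for i in range(depth)]
    print(rebuilt == padded[1])  # True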

replies(1): >>44653177 #
1. phoronixrly ◴[] No.44653177[source]
Not sure what you mean by 'easily repair single blocks the way a zfs RAID could', but often the physical devices handle bad blocks, and md has one safety layer on top of this - bad blocks tracking. No relocation in md though, AFAIK.
replies(2): >>44653209 #>>44653306 #
2. hammyhavoc ◴[] No.44653209[source]
If you have a redundant dataset (#1 reason to use ZFS replication) then you can repair a ZFS dataset.
replies(2): >>44653249 #>>44653317 #
3. phoronixrly ◴[] No.44653249[source]
I'm sorry, I still don't quite follow... If you have a RAID5, you can repair a drive failure... Weren't we talking about handling 'blocks'? Is it bad blocks or bad block devices (a.k.a. dead drives)?
4. wongarsu ◴[] No.44653306[source]
What I mean is that unRAID, ZFS and md all allow you to run a scrub over your array to check for bit rot. That can happen for all kinds of reasons, including cosmic rays simply flipping bits on the drive platter. The issue is that unRAID and md can't do much if they detect a block/stripe where the parity doesn't match the data, because they don't know which of the drives suffered the bit flip. ZFS, on the other hand, can repair the data in that scenario because it keeps per-block checksums.
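
A toy illustration of that asymmetry (not real md or ZFS code): with parity alone a scrub can see that something is wrong but not which copy is wrong, while a stored per-block checksum pinpoints the bad block.

    import hashlib

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    d0, d1 = b"good data!!!", b"more data..."
    parity = xor(d0, d1)
    checksums = {"d0": hashlib.sha256(d0).digest(),
                 "d1": hashlib.sha256(d1).digest()}

    # A cosmic ray flips a bit on drive 1.
    d1_rotted = bytes([d1[0] ^ 0x01]) + d1[1:]

    # md/unRAID-style scrub: parity no longer matches, but either drive
    # (or the parity itself) could be the culprit.
    print(xor(d0, d1_rotted) == parity)              # False -> mismatch detected
    print("but which of d0, d1, parity is wrong?")   # ambiguous with parity alone

    # ZFS-style scrub: the stored checksum identifies the bad block.
    print(hashlib.sha256(d0).digest() == checksums["d0"])         # True  -> d0 is fine
    print(hashlib.sha256(d1_rotted).digest() == checksums["d1"])  # False -> d1 is bad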

Now a fairly common scenario is to use unRAID with ZFS as the file system on each partition, giving you Y independent ZFS file systems. In that case the information needed to repair blocks exists in theory: a ZFS scrub will tell you which blocks are bad, and you could repair those from parity (and an unRAID parity check will do the same for the parity drives). But there is no mechanism to repair single blocks; you either have to dig in and do it yourself or just resilver the whole disk.
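
A minimal sketch of that "the information exists in theory" idea (hypothetical, not a feature unRAID actually exposes): once a scrub has identified which block on which data disk is bad, that single block could be recomputed from the parity disk plus the same block on every other data disk.

    def xor_all(blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    # The same block index on each data disk, plus the corresponding parity block.
    data_disks = [b"disk0blk", b"disk1blk", b"disk2blk"]
    parity_blk = xor_all(data_disks)

    # Suppose the scrub on disk 1's file system flagged this block as corrupt.
    bad_disk = 1
    repaired = xor_all([parity_blk] +
                       [blk for i, blk in enumerate(data_disks) if i != bad_disk])
    print(repaired == data_disks[bad_disk])  # True: the block is recoverable in place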

replies(1): >>44658049 #
5. hammyhavoc ◴[] No.44653317[source]
Hit the comment depth limit (so annoying), but the comment about repairing blocks means that you can repair bitrot/corruption/malicious changes/whatever down to the block level of a ZFS dataset if you have a redundant replicated dataset.

The magic of ZFS repairs isn't in RAID itself, IMO; it's in being able to take your cold replicated dataset, e.g. from LTO, an external disk, a remote server, etc., and repair any issues without needing to resilver, stress the whole array, interrupt access, or hurt performance.

RAID can correct issues, yes, but ZFS as a filesystem can repair itself from redundant datasets. Likewise, you can mount the snapshots like Apple Time Machine and get back specific versions of individual files.

I wish HN didn't limit comment depth as these are great questions and this is heavily under-discussed, but it's arguably the best reason to run ZFS, IMO.

Another way of putting this: you don't need a RAID array; you can run individual ZFS disks and still replicate and repair them. There are no limits to how many replicas or media you use, either. It's quite amazing for self-healing problems in your datasets.
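
A conceptual sketch of that kind of healing (simplified, not the actual ZFS send/receive mechanism): compare per-block checksums against a redundant replica and rewrite only the blocks that differ, leaving the rest of the dataset untouched.

    import hashlib

    BLOCK = 8

    def blocks(data):
        return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

    original = b"precious dataset contents that must survive bitrot"
    replica = blocks(original)          # cold copy: LTO, external disk, remote box...

    live = blocks(original)
    live[2] = b"CORRUPT!"               # bitrot / malicious change in a single block

    # Heal: only the mismatching block is rewritten; no full resilver needed.
    for i, (a, b) in enumerate(zip(live, replica)):
        if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
            print(f"repairing block {i}")
            live[i] = b

    print(b"".join(live) == original)   # True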

replies(2): >>44654027 #>>44654505 #
6. aspenmayer ◴[] No.44654027{3}[source]
> Hit the comment depth limit (so annoying)

I think it’s actually a flamewar detector that you may be hitting. In any case, next time try selecting the timestamp of the comment which you wish to reply to; this works when the reply button is missing and the comment isn’t [dead] or [flagged][dead] iirc.

replies(1): >>44691654 #
7. tomhow ◴[] No.44654505{3}[source]
The rate limiter is only applied to accounts that post too many comments that are of low-quality or break the guidelines. We're always open to turning off the rate limiter on an account but we need to see that the user has shown a sincere intent to use HN as intended over a reasonable period of time.
replies(1): >>44691660 #
8. reginald78 ◴[] No.44658049[source]
The silly part is that unRAID has all the pieces to do this. The btrfs file system, which unRAID supports for array disks, can identify bitrot, and the unRAID array can virtualize a missing disk by essentially reconstructing it from parity and all of the other disks. Combining the two would allow rebuilding a rotted file with features already present.

My impression is that the unRAID developers have largely neglected enhancing the core feature of their product. They seem to have put a lot of effort into ZFS support, which isn't easy to integrate since it isn't part of the kernel, even though ZFS isn't really the core draw of their product in the first place.

replies(1): >>44668125 #
9. bayindirh ◴[] No.44668125{3}[source]
I have test-driven BTRFS in the past.

It's too metadata-heavy: it really only shines on high-IOPS SSDs, and it's a no-go for spinning drives, especially external ones.

RAID5/6 is still not production-ready [0], and having a non-production-ready feature that isn't gated behind an "I know what I'm doing" switch is dangerous. I believe BTRFS's customers are not small fish but enterprises which protect their data in other ways.

So, I think unraid does the right thing by not doubling down on something half-baked. ZFS is battle tested at this point.

I'm personally building a small NAS for myself and researching the software stack to use. I can't trust BTRFS with my data, especially in the RAID5/6 form I'm planning to use.

[0]: https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid5...

10. hammyhavoc ◴[] No.44691654{4}[source]
Thanks!
11. hammyhavoc ◴[] No.44691660{4}[source]
I've been using HN since 2018, and whilst I'm a bit rough around the edges, I generally interact with the best of intentions as long as my blood glucose is within range (which, with a CGM, is more than at any other point in my life).