
65 points qvr | 2 comments
miffe ◴[] No.44652742[source]
What makes this different from regular md? I'm not familiar with unRAID.
replies(2): >>44652869 #>>44653098 #
eddythompson80 ◴[] No.44652869[source]
unRAID is geared towards homelab-style deployments. Its main advantage over typical RAID is its flexibility (https://www.snapraid.it/compare):

- You can throw in JBOD drives of any size and create a "RAID" over them.

- The parity drive(s) must be at least as large as the biggest data drive.

- N parity drives = surviving N simultaneous drive failures.

- You can expand your storage pool one drive at a time; you then recalculate parity for the full array.

The actual data is spread across the drives. If a drive fails, you rebuild it from the parity plus the surviving drives. MergerFS + SnapRAID is another implementation of the same idea: https://perfectmediaserver.com/02-tech-stack/snapraid/
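In the single-parity case the parity is just the XOR of the corresponding blocks on the data drives. A toy sketch of the idea, with in-memory byte blocks standing in for drives (not unRAID's actual on-disk format):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data "drives", each holding one equal-sized block
drives = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xff\x00\x0f"]
parity = xor_blocks(drives)

# Drive 1 dies: rebuild its contents from the survivors plus parity,
# since d0 ^ d2 ^ (d0 ^ d1 ^ d2) = d1
rebuilt = xor_blocks([drives[0], drives[2], parity])
assert rebuilt == drives[1]
```

Two or more parity drives need something stronger than plain XOR (SnapRAID uses Reed-Solomon-style codes for that), but the rebuild logic follows the same pattern: recompute the missing blocks from everything that survived.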

It's a very simple model to reason about compared to something like ZFS. You can add/remove capacity AND protection as you go.

Its performance is significantly lower than ZFS's, of course.

replies(6): >>44653100 #>>44653205 #>>44653256 #>>44654189 #>>44654558 #>>44655212 #
phoronixrly ◴[] No.44653205[source]
I have an issue with this though... Won't you get a write on the parity drive for every write to any other drive? That doesn't seem well balanced... to be frank, it looks like a good way to shoot yourself in the foot. Have the parity drive fail, then have another drive fail during the rebuild (a taxing process), and congrats -- your data is now eaten, but at least you saved a few hundred dollars by not buying drives of equal size...
replies(5): >>44653221 #>>44653334 #>>44653369 #>>44653437 #>>44653468 #
wongarsu ◴[] No.44653468[source]
You want your drives to fail at different times! Which means you want your load to be unbalanced, from a reliability standpoint. If you put the same load on every drive (like in a traditional RAID5/6) then the drives are likely to fail at around the same time, especially if you don't go out of your way to get drives from different manufacturing batches. But if you unbalance the amount of work the drives get, they accumulate wear and tear at different rates and spend different amounts of time idle, leading them to fail at wildly different times and giving you ample time to rebuild the RAID.

I'd still recommend that anyone have two parity drives (which unRAID does support).

replies(1): >>44654403 #
1. riddley ◴[] No.44654403[source]
I often see "drive failure" mentioned in these discussions, and I wish the phrase were "unrecoverable read error" instead, because that's more accurate. "Drive failure" conjures ideas of a completely failed device. An unrecoverable read error can and does happen with regularity on our bigger and bigger drives, and will stop most RAID rebuilds in their tracks.
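A back-of-the-envelope illustration of why this matters, assuming the commonly quoted consumer-drive spec of one URE per 1e14 bits read and a hypothetical 12 TB drive (both numbers are assumptions, not from the thread):

```python
# Spec-sheet URE rate (1 error per 1e14 bits) and drive size
URE_PER_BIT = 1e-14
drive_bytes = 12e12
bits_read = drive_bytes * 8  # 9.6e13 bits for a full read

# Probability of reading the entire drive without a single URE,
# treating each bit as an independent Bernoulli trial:
# roughly exp(-9.6e13 * 1e-14) = exp(-0.96), i.e. around 38%
p_clean = (1 - URE_PER_BIT) ** bits_read
print(f"chance of a URE-free full read: {p_clean:.1%}")
```

Real drives usually do much better than the spec-sheet worst case, but the point stands: at these sizes a rebuild that has to read every sector cleanly is far from a sure thing.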
replies(1): >>44657419 #
2. wongarsu ◴[] No.44657419[source]
"Unrecoverable read error" or "defects" is probably a better framing, because it highlights the need to run regular scrubs of your RAID. If you don't search for errors but just wait until the disk no longer powers on, you might find that by then you have more errors than your RAID configuration can recover from.
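In the single-parity model above, a scrub is conceptually just re-reading everything and checking that parity still matches. A toy sketch along the same lines (in-memory blocks, not any real tool's on-disk format):

```python
from functools import reduce

def scrub(data_blocks, parity_block):
    """Return True if the XOR of the data blocks still matches stored parity."""
    computed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_blocks)
    return computed == parity_block

drives = [b"\x01\x02", b"\x10\x20"]
parity = bytes(x ^ y for x, y in zip(*drives))
assert scrub(drives, parity)       # healthy array: parity checks out

drives[0] = b"\x01\x03"            # silent bit flip on drive 0
assert not scrub(drives, parity)   # scrub catches the mismatch
```

Note that parity alone only tells you *that* something disagrees, not which drive is wrong; SnapRAID additionally stores per-block checksums so it can pinpoint (and fix) the bad block.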