65 points qvr | 16 comments
miffe ◴[] No.44652742[source]
What makes this different from regular md? I'm not familiar with unRAID.
replies(2): >>44652869 #>>44653098 #
eddythompson80 ◴[] No.44652869[source]
unRAID is geared towards homelab-style deployments. Its main advantage over typical RAID is its flexibility (https://www.snapraid.it/compare):

- It lets you throw in JBODs (of ANY size) and create a "RAID" over them.

- The biggest drive(s) must be the parity drive(s).

- N parity = surviving N drive failures.

- You can expand your storage pool 1 drive at a time, though you need to recalculate parity for the full array.

The actual data is spread across drives. If a drive fails, you rebuild it from the parity. Another implementation of the same idea uses MergerFS + SnapRAID: https://perfectmediaserver.com/02-tech-stack/snapraid/

It's a very simple model to reason about compared to something like ZFS. You can add/remove capacity AND protection as you go.

Its performance is significantly worse than ZFS's, of course.
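
The parity scheme described above is essentially plain XOR across block offsets. A toy Python sketch of the idea (illustrative only, not unRAID's actual implementation; the "drives" are just made-up lists of block values):

```python
from functools import reduce
from itertools import zip_longest

def xor_parity(drives):
    # Drives may differ in size; zip_longest pads with None, which we
    # XOR as zero -- this is why the parity drive must be at least as
    # big as the largest data drive.
    return [reduce(lambda a, b: a ^ (b or 0), blocks, 0)
            for blocks in zip_longest(*drives)]

def rebuild(survivors, parity, failed_size):
    # XOR the parity against every surviving drive to recover the lost one.
    return xor_parity(survivors + [parity])[:failed_size]

# Three mismatched "drives"
d1, d2, d3 = [1, 2, 3, 4], [5, 6], [7, 8, 9]
parity = xor_parity([d1, d2, d3])

# Lose d3 entirely, then rebuild it from the rest plus parity
assert rebuild([d1, d2], parity, len(d3)) == d3
```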

replies(6): >>44653100 #>>44653205 #>>44653256 #>>44654189 #>>44654558 #>>44655212 #
1. phoronixrly ◴[] No.44653205{3}[source]
I have an issue with this though... Won't you get a write on the parity drive for each write on any other drive? Doesn't seem well balanced... to be frank, looks like a good way to shoot yourself in the foot. Have a parity drive fail, then have another drive fail during the rebuild (a taxing process) and congrats -- your data is now eaten, but at least you saved a few hundred dollars by not buying drives of equal size...
replies(5): >>44653221 #>>44653334 #>>44653369 #>>44653437 #>>44653468 #
2. hammyhavoc ◴[] No.44653221[source]
No, because you have a cache pool and calculate the parity changes on a schedule, or when specific conditions are met, e.g., remaining available storage on the cache pool.

The cache pool is recommended to be mirrored for this reason (not many people see why I find this to be amusing).

replies(2): >>44653300 #>>44653327 #
3. phoronixrly ◴[] No.44653300[source]
And let me guess, the cache pool is suggested to be on an SSD?

> Increased perceived write speed: You will want a drive that is as fast as possible. For the fastest possible speed, you'll want an SSD

Great, now I have an SSD that is treated as a consumable and will die and need to be replaced. Oh and btw you are going to need two of them if you don't want to accidentally your data.

The alternative? Have the cache on a pair of spinning-rust drives, which will again be overloaded, expected to fail earlier, and need to be replaced, while also having the benefit of being slow... But at least you won't have to go through a full rebuild after a cache drive failure.

Man, I am not sold on the cost savings of this approach at all... Let alone the complexity and moving parts that can fail...

replies(3): >>44653445 #>>44655367 #>>44657437 #
4. hammyhavoc ◴[] No.44653327[source]
Hit the comment depth limit again, but yes, SSDs!

Yes, Unraid can crash and burn in quite a lot of different ways. Ask me how I know! It's why I'm all-in on ZFS now.

5. eddythompson80 ◴[] No.44653334[source]
> Have a parity drive fail, then have another drive fail during the rebuild (a taxing process) and congrats -- your data is now eaten

That's just your drive failure tolerance. It's the same risk/capacity trade as RAIDZ1, but with less performance and more flexibility on expanding. Which is exactly what I said.

If 1 drive failure isn't acceptable for you, you wouldn't use RAIDZ1 and wouldn't use 1 parity drive.

You can use 2 parity drives for RAIDZ2-like protection.

You can use 3 drives for RAIDZ3-like protection.

You can use 4 drives, 10 drives. Add and remove as much parity/capacity as you want. You can't do that easily with RAID/RAIDZ.

You manage your own risk/reward ratio.

replies(1): >>44653442 #
6. nodja ◴[] No.44653369[source]
The wear on the parity drive is the same regardless of which RAID technology you choose; unRAID just lets you have mismatched data drives. In fact you could argue that unRAID is healthier for the drives, since a write doesn't trigger a write on all drives, just 2. The situation you described is true for any RAID system.
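
The "just 2 drives" comes from the standard read-modify-write parity update: new parity = old parity XOR old data XOR new data, so only the target drive and the parity drive are touched. A toy sketch (illustrative, not any real implementation):

```python
def update_block(data_drive, parity_drive, idx, new_value):
    # Read-modify-write: read the old block, fold the delta into
    # parity, write the new block. The other data drives stay idle.
    old = data_drive[idx]
    parity_drive[idx] ^= old ^ new_value
    data_drive[idx] = new_value

d1, d2, d3 = [1, 2], [3, 4], [5, 6]
parity = [a ^ b ^ c for a, b, c in zip(d1, d2, d3)]

update_block(d2, parity, 0, 9)  # the write touches only d2 and parity

# Parity is still consistent with all three data drives
assert parity == [a ^ b ^ c for a, b, c in zip(d1, d2, d3)]
```
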
7. dawnerd ◴[] No.44653437[source]
Depends. If you use a cache like they recommend, you'd only get parity writes when it runs its mover command. Definitely adds a lot of wear, but so far I haven't had any parity issues with two parity drives protecting 28 drives.
8. phoronixrly ◴[] No.44653442[source]
My issue is that due to uneven load balancing, the parity drive is going to fail more often than in a configuration with distributed parity, thus you are going to need to recalculate parity for the array more often, which is a risky and taxing operation for all drives in the array.

As hammyhavoc below noted, you can work around this by having cache, and 'by deferring the inevitable parity calculation until a later time (3:40 am server time, by default)'.

Which seems like a hell of a bodge -- both risky and expensive. Now the unevenly balanced drive is the cache one, and it is also not parity protected. So you need mirroring for it if you don't want to lose your data, and the cache drives are still expected to fail before a drive in an evenly load-balanced array, so you're going to have to buy new ones?

Oh and btw you are still at risk of bit flips and garbage data due to cache not being checksum-protected.

replies(2): >>44653579 #>>44653671 #
9. dawnerd ◴[] No.44653445{3}[source]
But you’d have that problem on any system really.
10. wongarsu ◴[] No.44653468[source]
You want your drives to fail at different times! Which means you want your load to be unbalanced, from a reliability standpoint. If you put the same load on every drive (like in a traditional RAID5/6) then the drives are likely to fail at around the same time. Especially if you don't go out of your way to get drives from different manufacturing batches. But if you misbalance the amount of work the drives get they accumulate wear and tear at different rates and spend different amounts of time in idle, leading them to fail at wildly different times, giving you ample time to rebuild the raid.

I'd still recommend that anyone have two parity drives (which unraid does support).

replies(1): >>44654403 #
11. eddythompson80 ◴[] No.44653579{3}[source]
You need to run frequent scrubs on the whole zfs array as well.

On unraid/snapraid you need to spin up 2 drives (one of them is always the parity).

On zfs, you are always spinning up multiple drives too. Sure, the "parity" isn't always on the same drives, or at least it's up to zfs to figure that out.

Nonetheless, this is all not really likely to have a significant impact. Spinning-disk failure rates don't exactly correlate with their utilization[1][2]. Between the SSD cache, ZFS scrubs, and general usage, I don't think the parity drives are necessarily more at risk. This is anecdotal, but when I ran an unRAID box for a few years myself, I only had 1 failure and it was a non-parity drive.

[1] Google study from 2007 for harddrive failure rates: https://static.googleusercontent.com/media/research.google.c...

[2] "Utilization" in the paper is defined as:

       The literature generally refers to utilization metrics by employing the term duty cycle which unfortunately has no consistent and precise definition, but can be roughly characterized as the fraction of time a drive is active out of the total powered-on time. What is widely reported in the literature is that higher duty cycles affect disk drives negatively
12. Dylan16807 ◴[] No.44653671{3}[source]
> due to uneven load balancing, the parity drive is going to fail more often than in a configuration with distributed parity

Good, it can be the canary.

> thus you are going to need to recalculate parity for the array more often, which is a risky and taxing operation for all drives in the array

This is not worth worrying about.

First off, if the risk is linear then your increased parity failure is offset by decreased other-drive failure and I don't think you'll have more rebuilds.

And even if you do get more rebuilds, it's significantly less than one per year, and one extra full-drive read per year is a negligible amount of load. If you're worried about it all hitting at once then A) you should be scrubbing more often and B) throttle the rebuild.

13. riddley ◴[] No.44654403[source]
I often see "drive failure" mentioned in these discussions, and I wish the phrase were instead "unrecoverable read error", because that's more accurate. To me, "drive failure" conjures ideas of completely failed devices. An unrecoverable read error can and does happen on our bigger and bigger drives with regularity, and will stop most RAID rebuilds in their tracks.
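
A rough back-of-envelope for how likely that is, assuming the commonly quoted consumer spec of one URE per 1e14 bits read (real-world rates vary widely, and the 12 TB drive size here is just an example):

```python
# Probability of at least one unrecoverable read error while reading
# a whole drive end to end, e.g. during a rebuild. The 1e-14 per-bit
# figure is a typical consumer-drive datasheet spec, not a measurement.
ure_rate = 1e-14          # UREs per bit read (datasheet assumption)
drive_bits = 12e12 * 8    # a hypothetical 12 TB drive

p_clean = (1 - ure_rate) ** drive_bits
print(f"chance of >=1 URE on a full read: {1 - p_clean:.0%}")
```

At these numbers the chance works out to roughly 60%, which is why a single-parity rebuild of a large array is a genuinely risky operation.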
replies(1): >>44657419 #
14. Dylan16807 ◴[] No.44655367{3}[source]
> Great, now I have an SSD that is treated as a consumable and will die and need to be replaced.

It's only a consumable if you hit the write limit. Hard drive arrays are usually not intended for tons of writes. SSDs at $100 or less are rated for at least 2000 terabytes written (WD Red SN700). How many hundreds of gigabytes of churn do you need per day?
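
The endurance arithmetic, using the 2000 TBW figure from above and a hypothetical 200 GB/day of churn:

```python
# Lifetime of an SSD cache at a steady daily write volume.
# 2000 TBW is from the comment above; 200 GB/day is an assumed workload.
tbw = 2000                 # rated terabytes written
daily_writes_tb = 0.2      # 200 GB of churn per day
years = tbw / daily_writes_tb / 365
print(f"{years:.0f} years to hit the write limit")  # prints "27 years to hit the write limit"
```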

15. wongarsu ◴[] No.44657419{3}[source]
"Unrecoverable read error" or "defect" is probably a better framing because it highlights the need to run regular scrubs of your RAID. If you don't search for errors but just wait until the disk no longer powers on, you might find out that by then you have more errors than your RAID configuration can recover from.
16. 42lux ◴[] No.44657437{3}[source]
Nobody has to rebuild after a cache drive failure. The data you would lose is the not-yet-moved data on the cache drive. You are really overthinking this with prior knowledge that leads you to assumptions that are just not true.