
65 points qvr | 1 comments | | HN request time: 0.205s | source
miffe ◴[] No.44652742[source]
What makes this different from regular md? I'm not familiar with unRAID.
replies(2): >>44652869 #>>44653098 #
eddythompson80 ◴[] No.44652869[source]
unRAID is geared towards homelab-style deployments. Its main advantage over typical RAID is its flexibility (https://www.snapraid.it/compare):

- It lets you throw in JBOD drives (of ANY size) and create a "RAID" over them.

- The parity drive(s) must be the biggest drive(s) in the array.

- N parity = surviving N drive failures.

- You can expand your storage pool one drive at a time, though you need to recalculate parity for the full array.

The actual data is spread across drives; if a drive fails, you rebuild it from the parity. Another implementation of the same idea uses MergerFS + SnapRAID: https://perfectmediaserver.com/02-tech-stack/snapraid/
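The single-parity case described above boils down to XOR, the same trick RAID 4/5 uses. A toy sketch of the idea (the block contents and drive count here are made up for illustration; real implementations operate on whole drives, not four-byte strings):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three "data drives", each holding one equal-sized block.
data = [b"aaaa", b"bbbb", b"cccc"]

# The parity "drive" is the XOR of all data blocks.
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

This is also why single parity only survives one failure: with two drives missing, the XOR equation has two unknowns. Surviving N failures needs N independent parity functions (SnapRAID uses Reed-Solomon-style codes for that).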

It's a very simple model to reason about compared to something like ZFS. You can add/remove capacity AND protection as you go.

Its performance is significantly lower than ZFS's, of course.

replies(6): >>44653100 #>>44653205 #>>44653256 #>>44654189 #>>44654558 #>>44655212 #
phoronixrly ◴[] No.44653205[source]
I have an issue with this though... Won't you get a write on the parity drive for each write on any other drive? Doesn't seem well balanced... to be frank, looks like a good way to shoot yourself in the foot. Have a parity drive fail, then have another drive fail during the rebuild (a taxing process) and congrats -- your data is now eaten, but at least you saved a few hundred dollars by not buying drives of equal size...
replies(5): >>44653221 #>>44653334 #>>44653369 #>>44653437 #>>44653468 #
hammyhavoc ◴[] No.44653221[source]
No, because you have a cache pool and calculate the parity changes on a schedule, or when specific conditions are met, e.g., remaining available storage on the cache pool.

The cache pool is recommended to be mirrored for this reason (not many people see why I find this to be amusing).

replies(2): >>44653300 #>>44653327 #
phoronixrly ◴[] No.44653300[source]
And let me guess, the cache pool is suggested to be on an SSD?

> Increased perceived write speed: You will want a drive that is as fast as possible. For the fastest possible speed, you'll want an SSD

Great, now I have an SSD that is treated as a consumable and will die and need to be replaced. Oh, and by the way, you are going to need two of them if you don't want to accidentally lose your data.

The alternative? Have the cache on a pair of spinning-rust drives, which will again be overloaded and expected to fail earlier and need replacing, while also having the benefit of being slow... But at least you won't have to go through a full rebuild after a cache drive failure.

Man, I am not sold on the cost savings of this approach at all... Let alone the complexity and moving parts that can fail...

replies(3): >>44653445 #>>44655367 #>>44657437 #
Dylan16807 ◴[] No.44655367[source]
> Great, now I have an SSD that is treated as a consumable and will die and need to be replaced.

It's only consumable if you hit the write limit. Hard drive arrays are usually not intended for tons of writes. SSDs at $100 or less are rated for at least 2000 terabytes written (e.g., the WD Red SN700). How many hundreds of gigabytes of churn do you need per day?