
423 points speckx | 2 comments
dylan604 ◴[] No.44533476[source]
SSD speeds are nothing short of miraculous in my mind. I come from the old days of striping 16 HDDs together (at a minimum number) to get 1GB/s throughput. Depending on the chassis, that was two 8-drive enclosures in the "desktop" version or the large 4RU enclosures with redundant PSUs and fans loud enough to overpower arena rock concerts. Now we can get 5+GB/s of throughput from a tiny stick that can be used externally via a single cable for data & power and is absolutely silent. I edit 4K+ video as well, and can now edit directly from the same device the camera recorded to during production. I'm skipping over the part about still making backups, but there's no more multi-hour copy from source media to edit media during a DIT step. I've spent many a shoot as a DIT wishing the 1s & 0s would travel across devices faster while everyone else on the production had already left, so this is much appreciated. Oh, and those 16-drive units only came close to 4TB around the time I finally dropped spinning rust.

The first enclosure I ever dealt with was a 7-bay RAID-0 that could just barely handle AVR75 encoding from Avid. Just barely, to the point that only video was saved to the array; the audio throughput would have put it over the top, so audio was saved to a separate external drive.

Using SSD feels like a well deserved power up from those days.
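
Back-of-envelope for the 16-drive figure, assuming roughly 65 MB/s of sustained sequential throughput per disk of that era (an assumed number, not one given in the comment):

  % rough striping arithmetic with an assumed per-disk rate
  16 \times 65\ \mathrm{MB/s} \approx 1040\ \mathrm{MB/s} \approx 1\ \mathrm{GB/s}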

replies(8): >>44533735 #>>44534375 #>>44535266 #>>44535471 #>>44536311 #>>44536501 #>>44539458 #>>44539872 #
gchamonlive ◴[] No.44533735[source]
> I come from the old days of striping 16 HDDs together (at a minimum number) to get 1GB/s throughput

Woah, how long would that last before you'd start having to replace the drives?

replies(3): >>44533870 #>>44534101 #>>44538566 #
adastra22 ◴[] No.44538566[source]
I run 24x RAID at home. I’m replacing disks 2-3 times per year.
replies(1): >>44538944 #
dylan604 ◴[] No.44538944[source]
Are your drives under heavy load or primarily just spinning, waiting for use? Are they dying unexpectedly, or are you watching the SMART messages and prepared when it happens?
replies(1): >>44541085 #
adastra22 ◴[] No.44541085[source]
They’re idle most of the time. Powered on 24/7, though, with maybe a few hundred megabytes written every day, plus a few dozen gigabytes now and then. Mostly long-term storage. SMART has too much noise; I wait for ZFS to kick a drive out of the pool before replacing it. With triple redundancy, I’ve never come close to data loss.

To be clear, I should have said replacing 2-3 disks per year.
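
A minimal sketch of the "wait for ZFS to flag it" approach described above, using the standard `zpool status -x` health summary; this is not the commenter's actual setup, just an illustration:

  # Check whether any ZFS pool has a device ZFS wants replaced.
  import subprocess

  def pool_needs_attention() -> bool:
      # `zpool status -x` prints "all pools are healthy" when nothing is wrong,
      # and a full report (listing DEGRADED/FAULTED devices) otherwise.
      out = subprocess.run(
          ["zpool", "status", "-x"], capture_output=True, text=True, check=True
      ).stdout.strip()
      return out != "all pools are healthy"

  if __name__ == "__main__":
      if pool_needs_attention():
          print("ZFS reports a problem; run `zpool status` to see which drive to swap.")
      else:
          print("All pools healthy; no replacement needed.")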

replies(1): >>44542869 #
1. somehnguy ◴[] No.44542869[source]
That seems awfully high, no? I've been running a 5-disk raidz2 pool (3TB disks) and haven't replaced a single drive in the last 6-ish years. It's composed entirely of used/decommissioned drives from eBay; the manufacture date stamp on most of them says 2014.

I did have a period where I thought drives were failing, but further investigation revealed that ZFS just didn't like the drives spinning down to save power and would mark them as failed. I don't remember the parameter, but I essentially forced the drives to spin 24/7 instead of spinning down when idle, and it's been fine ever since. My health monitoring script scrubs the array weekly.
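
For reference, a weekly scrub job along these lines can be as small as the sketch below; the pool name "tank" is a placeholder, not the commenter's, and keeping disks from spinning down is a separate knob (on Linux, one common way is `hdparm -S 0`, though the specific parameter the commenter changed isn't given):

  # Weekly cron job: start a scrub and print the pool status so the
  # cron mail shows the result. Assumes a pool named "tank" (placeholder).
  import subprocess

  POOL = "tank"

  # `zpool scrub` starts the scrub and returns immediately; it fails if a
  # scrub is already running, which check=True turns into an exception.
  subprocess.run(["zpool", "scrub", POOL], check=True)

  # Print current status (the scrub will show as "in progress" here).
  print(subprocess.run(["zpool", "status", POOL],
                       capture_output=True, text=True, check=True).stdout)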

replies(1): >>44544197 #
2. adastra22 ◴[] No.44544197[source]
Drives I RMA have actual bad sectors. You have a good batch. These drives tend to either last 10+ years, or fail in 1-3 years, and there is a clear bimodal distribution. I think about half the drives in my array are original too.