65 points qvr | 2 comments
nullc ◴[] No.44653420[source]
Are any filesystems offering file-level FEC yet?

If a file has a hundred thousand blocks, you could tack on a thousand blocks of error correction for the cost of making it just 1% larger. If the file is a seldom- or never-written archive, that's essentially free beyond the space it takes up.
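
(A rough sketch of the arithmetic, assuming a Reed-Solomon-style block code: with k data blocks and p parity blocks, you can repair up to p lost blocks whose positions are known, or p/2 corrupted blocks whose positions aren't. The 100,000/1,000 figures are just the example above.)

    def fec_capacity(data_blocks: int, parity_blocks: int) -> None:
        """Space overhead and repair capacity of an RS-style block code."""
        overhead = parity_blocks / data_blocks
        print(f"space overhead:                     {overhead:.1%}")
        print(f"repairable known-position losses:   {parity_blocks}")
        print(f"repairable unknown-position errors: {parity_blocks // 2}")

    fec_capacity(100_000, 1_000)   # -> 1.0%, 1000 erasures, 500 errors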

The kind of massive data archives that you want to minimize storage costs of tend to be read-mostly affairs.

It won't save you from a whole-disk failure, but I see bad blocks much more often than whole-disk failures these days... and RAID 5/6 have rather high costs while still being quite vulnerable to aligned faults on multiple disks.

Of course you could use par or similar tools, but that lacks nice FS-transparent integration, and in particular doesn't benefit from the checksums already implemented in (some) filesystems: when the error positions are already known, you only need half as much error-correction data to recover, and/or can use erasure-only codes.
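
(To make the known-position point concrete, a minimal sketch using the third-party Python reedsolo package; the erase_pos keyword and the tuple return value are assumptions about that library's API, and real FS-level FEC would operate on 4K blocks rather than bytes. With 10 parity symbols, all 10 corrupted symbols are recoverable when a checksum has already told us where they are; without the positions, the same parity only covers 5.)

    from reedsolo import RSCodec

    rsc = RSCodec(10)                              # 10 parity symbols per codeword
    encoded = rsc.encode(b"seldom-written archive data")

    # Corrupt 10 symbols at known positions -- the "checksum already flagged
    # these blocks" case.
    corrupted = bytearray(encoded)
    bad_positions = list(range(10))
    for pos in bad_positions:
        corrupted[pos] ^= 0xFF

    result = rsc.decode(bytes(corrupted), erase_pos=bad_positions)
    decoded = result[0] if isinstance(result, tuple) else result  # newer versions return a tuple
    print(decoded)   # original message, despite as many corruptions as parity symbols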

replies(5): >>44653469 #>>44653485 #>>44654527 #>>44655069 #>>44655131 #
1. Dylan16807 ◴[] No.44655069[source]
I think the closest you're going to get is splitting the drive into 20 partitions and running RAIDZ across them.
replies(1): >>44655593 #
2. nullc ◴[] No.44655593[source]
Yeesh, that would have pretty poor performance and non-trivial overhead relative to the level of protection against bad blocks it provides.