The filesystem should do files; if you want something more complex, do it in userspace. We even have FUSE if you want to use the filesystem API with your crazy network database thing.
That's pretty much built into most mass storage devices already.
> If a disk bitflips one of my files
In many situations the likelihood and consequences of this are not worth the overhead of adding additional ECC on top of what the drive already does.
> ext* won't do anything about it.
What should it do? Blindly hand you the data without any indication that there's a problem with the underlying block? Without an fsck, what mechanism do you suppose would manage these errors as they're discovered?
> That's pretty much built into most mass storage devices already.
And ZFS has shown that it is not sufficient (at least for some use-cases, perhaps less of a big deal for 'residential' users).
> The likelihood and consequence of this occurring is in many situations not worth the overhead of adding additional ECC on top of what the drive does.
Not worth it to whom? Not having the option available at all is the problem. I can do a zfs set checksum=off pool_name/dataset_name if I really want that extra couple of percentage points of performance.
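For example (pool and dataset names here are just placeholders), it's a per-dataset toggle:

    # see what checksum algorithm a dataset is using
    zfs get checksum tank/dataset

    # turn checksumming off (generally a bad idea) or back on
    zfs set checksum=off tank/dataset
    zfs set checksum=on tank/dataset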
> Without an fsck what mechanism do you suppose would manage these errors as they're discovered?
Depends on the data involved: if it's part of the file system tree metadata, there are often multiple copies even for a single disk on ZFS. So instead of the kernel consuming corrupted data and potentially panicking (or going off into the weeds), it can find a correct copy elsewhere.
If you're in a fancier configuration with some level of RAID, then there could be other copies of the data, or it could be rebuilt through ECC.
With ext*, LVM, and mdadm no such possibility exists because there are no checksums at any of those layers (perhaps if you glom on dm-integrity?).
And with ZFS one can set copies=2 on a per-dataset basis (perhaps just for /home?), and get multiple copies strewn across the disk: won't save you from a drive dying, but could save you from corruption.
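Something like the following, with made-up names; note that copies only applies to data written after the property is set:

    # keep two copies of everything under tank/home, even on a single disk
    zfs set copies=2 tank/home

    # confirm the setting
    zfs get copies tank/home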
Which implies you can already correct errors through a simple majority mechanism.
> or it could be rebuilt through ECC.
So just by having the appropriate level of RAID you automatically solve the problem. Why is this in the fs layer then?
I don't think so? You set copies=2, and the disk says your file starts with 01010101, except that the second copy says your file starts with 01010100. How do you tell which one is right? For that matter, even with only one copy ZFS can tell that what it has is wrong even if it can't fix it, and flagging the error is still useful.
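A rough sketch of why an independent checksum breaks the tie (file names are arbitrary, and ZFS actually uses fletcher4 or sha256 over blocks, with the checksum stored in the parent block pointer rather than next to the data):

    # hash both copies and compare against the separately-stored checksum
    sha256sum copy_a copy_b
    # whichever digest matches the recorded checksum gets served to the reader;
    # the other copy is flagged and can be rewritten from the good one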
> So just by having the appropriate level of RAID you automatically solve the problem. Why is this in the fs layer then?
Similarly, you shouldn't need RAID to catch problems, only (potentially) to correct them. I do agree that it doesn't necessarily have to be in the FS layer, but AFAIK Linux doesn't have any other layers that do a good job of it (as mentioned above, dm-integrity exists but halving the write speed is a pretty big problem).
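For what it's worth, a rough sketch of standalone dm-integrity (the device path is a placeholder; the write penalty comes from every write also updating the checksum/journal area):

    # add per-sector checksums to a device (destroys existing data)
    integritysetup format /dev/sdX

    # open it as a mapped device that verifies every read
    integritysetup open /dev/sdX sdX_int

    # then put ext4/LVM/mdadm on top of the mapped device
    mkfs.ext4 /dev/mapper/sdX_int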
The disk is going to report an uncorrected error for one of them.
Define "fs layer". ZFS has multiple layers within it:
The "file system" that most people interact with (for things like homedirs) is actually a layer with-in ZFS' architecture, and is called the ZFS POSIX layer (ZPL). It exposes a POSIX file system, and take the 'tradition' Unix calls and creates objects. Those objects are passed to the Data Management Unit (DMU), which then passed them down to Storage Pool Allocator (SPA) layer which actually manages the striping, redundancy, etc.
* https://ibug.io/blog/2023/10/zfs-block-size/
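If you're curious you can poke at the lower layers directly with zdb (pool/dataset names are placeholders, and the output format varies by version):

    # dump the DMU objects backing a dataset, bypassing the POSIX view
    zdb -dd tank/home

    # show the pool configuration the SPA is working with
    zdb -C tank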
There was a bit of a 'joke' back in the day about ZFS being a "layering violation" because it subsumed RAID, volume management, and a file system into itself, instead of having each in a separate software package:
* https://web.archive.org/web/20070508214221/https://blogs.sun...
* https://lildude.co.uk/zfs-rampant-layering-violation
The ZPL is not used all the time: one can create a block device ("zvol") and put swap or iSCSI on it (a quick example is sketched after the links below). The Lustre folks have their own layer that hooks into the DMU and doesn't bother with POSIX semantics:
* https://wiki.lustre.org/ZFS_OSD_Hardware_Considerations
* https://www.eofs.eu/wp-content/uploads/2024/02/21_andreas_di...
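To make the zvol bit concrete (pool name, volume name, and size are all made up):

    # create a 16 GiB block device backed by the pool
    zfs create -V 16G tank/swapvol

    # it appears under /dev/zvol/ and can be used like any other block device
    mkswap /dev/zvol/tank/swapvol
    swapon /dev/zvol/tank/swapvol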
Empirical evidence has shown otherwise: I have regularly gotten checksum errors that ZFS has flagged during a scrub.
The ZFS developers have said in interviews that disks, when asked for LBA 123, have returned the contents of LBA 234 (due to disk firmware bugs): the on-disk checksum for 234 is correct, and so the bits were passed up the stack, but that's not the data that the kernel/ZFS asked for. It is only by verifying at the file system layer that the problem was caught (because at the disk layer things were "fine").
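Easy enough to check on your own pool (the pool name is a placeholder):

    # read every block and verify it against its checksum
    zpool scrub tank

    # corrupted blocks (and any repairs made from redundancy) show up here
    zpool status -v tank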
A famous paper that used Google's large quantity of drives as a 'sample population' mentions file system-level checks:
* https://www.cs.toronto.edu/~bianca/papers/fast08.pdf
See also the Google File System paper (§5.2 Data Integrity):
* https://research.google/pubs/the-google-file-system/
Trusting drives is not wise.