
144 points ksec | 26 comments
msgodel ◴[] No.44466535[source]
The older I get the more I feel like anything other than the ExtantFS family is just silly.

The filesystem should do files, if you want something more complex do it in userspace. We even have FUSE if you want to use the Filesystem API with your crazy network database thing.

replies(3): >>44466685 #>>44466895 #>>44467306 #
1. anonnon ◴[] No.44466685[source]
> The older I get the more I feel like anything other than the ExtantFS family is just silly.

The extended (not extant) family (including ext4) doesn't support copy-on-write. Using one as your primary FS after 2020 (or even 2010) is like using a non-journaling file system after 2010 (or even 2001)--it's a non-negotiable feature at this point. Btrfs has been stable for a decade, and if you don't like or trust it, there's always ZFS, which has been stable for 20 years now. Apple now has APFS, with CoW, on all their devices, while MSFT still treats ReFS as unstable, and Windows servers still rely heavily on NTFS.

replies(7): >>44466706 #>>44466709 #>>44466817 #>>44467125 #>>44467236 #>>44467926 #>>44468462 #
2. msgodel ◴[] No.44466706[source]
Again I don't really want the kernel managing a database for me like that, the few applications that need that can do it themselves just fine. (IME mostly just RDBMSs and Qemu.)
3. robotnikman ◴[] No.44466709[source]
>Windows will at some point have ReFS

They seem to be slowly introducing it to the masses; Dev Drives you set up on Windows automatically use ReFS.

4. milkey_mouse ◴[] No.44466817[source]
Hell, there's XFS if you love stability but want CoW.
replies(1): >>44466882 #
5. josephcsible ◴[] No.44466882[source]
XFS doesn't support whole-volume snapshots, which is the main reason I want CoW filesystems. And it also stands out as being basically the only filesystem that you can't arbitrarily shrink without needing to wipe and reformat.
replies(4): >>44467133 #>>44467505 #>>44468243 #>>44470078 #
6. leogao ◴[] No.44467125[source]
btrfs has eaten my data within the last decade. (not even because of the broken erasure coding, which I was careful to avoid!) not sure I'm willing to give it another chance. I'd much rather use zfs.
replies(1): >>44468255 #
7. leogao ◴[] No.44467133{3}[source]
you can always have an LVM layer for atomic snapshots
replies(2): >>44467244 #>>44476293 #
8. NewJazz ◴[] No.44467236[source]
CoW is an efficiency gain. Does it do anything to ensure data integrity, like journaling does? I think it is an unreasonable comparison you are making.
replies(4): >>44467447 #>>44467606 #>>44467639 #>>44474557 #
9. josephcsible ◴[] No.44467244{4}[source]
There are advantages to having the filesystem do the snapshots itself. For example, if you have a really big file that you keep deleting and restoring from a snapshot, you'll only pay the cost of the space once with Btrfs, but will pay it every time over with LVM.
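A toy model of why that works (illustrative only, not real btrfs internals): blocks are reference-counted, a snapshot just bumps refcounts, and deleting a file only frees blocks nobody else references, so restoring from the snapshot never pays for the same blocks twice.

```python
# Toy model of CoW snapshot sharing: a snapshot copies nothing, it only
# takes extra references to existing blocks (not real btrfs internals).
from collections import Counter

class CowStore:
    def __init__(self):
        self.blocks = {}            # block id -> data
        self.refcount = Counter()
        self.next_id = 0

    def write_file(self, data, block_size=4):
        ids = []
        for i in range(0, len(data), block_size):
            bid, self.next_id = self.next_id, self.next_id + 1
            self.blocks[bid] = data[i:i + block_size]
            self.refcount[bid] += 1
            ids.append(bid)
        return ids                  # a "file" is a list of block ids

    def snapshot(self, ids):        # share blocks, copy nothing
        for bid in ids:
            self.refcount[bid] += 1
        return list(ids)

    def delete(self, ids):
        for bid in ids:
            self.refcount[bid] -= 1
            if self.refcount[bid] == 0:
                del self.blocks[bid]

store = CowStore()
f = store.write_file(b"some big file")
snap = store.snapshot(f)
before = len(store.blocks)
store.delete(f)                     # "rm bigfile"
restored = store.snapshot(snap)     # restore from the snapshot
assert len(store.blocks) == before  # no extra space was ever consumed
```

With a block-level LVM snapshot, by contrast, the restored copy occupies fresh blocks in the origin volume each time.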
10. webstrand ◴[] No.44467447[source]
I use CoW a lot just managing files. It's only an efficiency gain if you have enough space to do the data-copying operation. And that's not necessarily true in all cases.

Being able to quickly take a "backup" copy of some multi-gb directory tree before performing some potentially destructive operation on it is such a nice safety net to have.

It's also a handy way to backup file metadata, like mtime, without having to design a file format for mapping saved mtimes back to their host files.
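That workflow is usually just `cp --reflink`; a sketch (assumes GNU coreutils cp; `--reflink=auto` shares extents on CoW filesystems like btrfs or XFS-with-reflink and silently falls back to an ordinary copy elsewhere):

```python
# Sketch of the "cheap copy before a risky operation" workflow using
# GNU cp's --reflink flag; on a non-CoW filesystem this degrades to a
# normal (space-consuming) copy.
import filecmp
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "tree")
    os.makedirs(src)
    with open(os.path.join(src, "data.bin"), "wb") as f:
        f.write(os.urandom(1 << 16))

    # take the "backup" before doing anything destructive to src
    subprocess.run(["cp", "-a", "--reflink=auto", src, src + ".bak"],
                   check=True)

    identical = filecmp.cmp(os.path.join(src, "data.bin"),
                            os.path.join(src + ".bak", "data.bin"),
                            shallow=False)

assert identical
```

`-a` also preserves mtimes and other metadata, which is the second use mentioned above.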

11. kzrdude ◴[] No.44467505{3}[source]
there was the "old dog new tricks" xfs talk a long time ago, but I suppose it was for fun and exploration and not really a sneak peek at snapshots
12. anonnon ◴[] No.44467606[source]
> CoW is an efficiency gain.

You're thinking of the optimization technique of CoW, as in what Linux does when spawning a new thread or forking a process. I'm talking about it in the context of only ever modifying copies of file system data and metadata blocks, for the purpose of ensuring file system integrity, even in the context of sudden power loss (EDIT: wrong link): https://www.qnx.com/developers/docs/8.0/com.qnx.doc.neutrino...

If anything, ordinary file IO is likely to be slightly slower on a CoW file system, due to it always having to copy a block before said block can be modified and updating block pointers.

13. throw0101d ◴[] No.44467639[source]
> Does it do anything to ensure data integrity, like journaling does?

What kind of journaling though? By default ext4 only uses journaling for metadata updates, not data updates (see "ordered" mode in ext4(5)).

So if you have a (e.g.) 1000MB file, and you update 200MB in the middle of it, you can have a situation where the first 100MB is written out and the system dies with the other 100MB vanishing.

With a CoW, if the second 100MB is not written out and the file sync'd, then on system recovery you're back to the original file being completely intact. With ext4 in the default configuration you have a file that has both new-100MB and stale-100MB in the middle of it.

The updating of the file data and the metadata are two separate steps (by default) in ext4:

* https://www.baeldung.com/linux/ext-journal-modes

* https://michael.kjorling.se/blog/2024/ext4-defaulting-to-dat...

* https://fy.blackhats.net.au/blog/2024-08-13-linux-filesystem...

Whereas with a proper CoW (like ZFS), updates are ACID.
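The difference can be sketched with a toy crash simulation (it models the failure mode only, not ext4 or ZFS internals): an interrupted in-place overwrite leaves a torn file, while writing a copy and atomically swapping it in leaves the old file intact.

```python
# Toy crash simulation of the two write strategies described above.
import os
import tempfile

def overwrite_in_place(path, new, crash_after):
    with open(path, "r+b") as f:
        f.write(new[:crash_after])      # "power loss" here -> torn file

def cow_style_write(path, new, crash_after):
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(new[:crash_after])      # "power loss" here...
    if crash_after < len(new):
        os.unlink(tmp)                  # ...leaves the old file untouched
    else:
        os.replace(tmp, path)           # atomic pointer swap on success

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a")
    b = os.path.join(d, "b")
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(b"O" * 10)          # the original 10-byte "file"

    overwrite_in_place(a, b"N" * 10, crash_after=5)
    cow_style_write(b, b"N" * 10, crash_after=5)

    with open(a, "rb") as f:
        torn = f.read()                 # mix of new and stale data
    with open(b, "rb") as f:
        intact = f.read()               # still the old version

assert torn == b"N" * 5 + b"O" * 5
assert intact == b"O" * 10
```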

replies(1): >>44474604 #
14. tbrownaw ◴[] No.44467926[source]
> The extended (not extant) family (including ext4)

I read that more as "we have filesystems at home, and also get off my lawn".

15. MertsA ◴[] No.44468243{3}[source]
You can shrink XFS, but only the realtime volume. All you need is xfs_db and a steady hand. I once had to pull this off for a shortened test program for a new server platform at Meta. Works great except some of those filesystems did somehow get this weird corruption around used space tracking that xfs_repair couldn't detect... It was mostly fine.
16. bombcar ◴[] No.44468255[source]
I used reiserfs for a while, but after I noticed it eating data (tail packing plus power loss) I quickly switched to xfs when it became available.

Speed is sometimes more important than absolute reliability, but it’s still an undesirable tradeoff.

17. zahlman ◴[] No.44468462[source]
... NTFS does copy-on-write?

... It does hard links? After checking: It does hard links.

... Why didn't any programs I had noticeably take advantage of that?

replies(1): >>44468793 #
18. anonnon ◴[] No.44468793[source]
> NTFS does copy-on-write?

No, it doesn't. Maybe you're thinking of shadow volume copies or something else. CoW file systems never modify data or metadata blocks directly, only modifying copies, with the root of the updated block pointer graph only updated after all other changes have been synced. Read this: https://www.qnx.com/developers/docs/8.0/com.qnx.doc.neutrino...

replies(1): >>44469318 #
19. zahlman ◴[] No.44469318{3}[source]
>No, it doesn't. Maybe you're thinking of shadow volume copies or something else.

I was asking because I didn't know, and I thought the other person was implying that it did.

I know what copy-on-write is.

replies(1): >>44469587 #
20. anonnon ◴[] No.44469587{4}[source]
The "other person" (only mention of NTFS) is me, here:

> while MSFT still treats ReFS as unstable, and Windows servers still rely heavily on NTFS.

By this I implied that it's an embarrassment to MSFT that iOS devices have a better, more reliable file system (APFS) than even Windows servers do now (having to rely on NTFS until ReFS is ready for prime time). If HN users and mods didn't tone-police so heavily, I could state things more frankly.

21. adrian_b ◴[] No.44470078{3}[source]
Many years ago, XFS did not support snapshots.

However, XFS has supported snapshots for a long time now.

See for example:

https://thelinuxcode.com/xfs-snapshot/

I am not sure what you mean by "whole-volume" snapshots, but I have not noticed any restrictions in the use of XFS snapshots. As expected, they store a snapshot of the entire file system, which can be restored later.

In many decades of managing computers with all kinds of operating systems and file systems, on a variety of servers and personal computers, I have never had the need to shrink a file system. I cannot imagine how such a need can arise, except perhaps as a consequence of bad planning.

It has also been many decades since I deprecated the use of multiple partitions on a storage device, with the exception of bootable devices, which must have a dedicated partition for booting, conforming to BIOS or UEFI expectations. For anything that was done in ancient times with multiple partitions, there are better alternatives now. With the exception of bootable USB sticks with live Linux or FreeBSD partitions, I use XFS on whole SSDs or HDDs (i.e. unpartitioned), whether internal or external, so there is never any need to change the size of the file system.

Even so, copying a file system to an external device, reformatting the device and copying the file system back is not likely to be significantly slower than shrinking in place. In fact sometimes it can be faster and it has the additional benefit that the new copy of the file system will be defragmented.

Much more significant than the lack of shrinking, which would only slow down something that occurs very seldom, is that both EXT4 and XFS are much faster for most applications than the other file systems available for Linux, so they are fast for the frequent operations. You may choose another file system for other reasons, but choosing one to speed up a very rare operation like shrinking is a very weak reason.

replies(1): >>44470789 #
22. CoolCold ◴[] No.44470789{4}[source]
I have definitely seen several cases where support for shrinking would be beneficial--usually around migrations and the like--but I agree it's quite a rare operation. The benefit is a smaller downtime window and/or less time and expense spent duplicating systems.

E.g., back in ~2013-2014, while moving a bare-metal Windows server into VMware, shrinking and then optimizing the MFT saved, AFAIR, about 2 hours of the downtime window.

> except perhaps as a consequence of bad planning

Assuming people go to clouds instead of physical servers because they may need to add 100 more nodes "suddenly"--the selling point of clouds is "avoid planning"--one may expect that cases where shrinking is needed are rising, not falling. It may be mitigated by different approaches, of course--e.g., it's often easier to re-set-up the VM--but still.

replies(1): >>44472188 #
23. adrian_b ◴[] No.44472188{5}[source]
I do not see the connection between shrinking and migrations.

In a migration you normally copy the file system elsewhere, to the cloud or to different computers; you do not shrink it in place, which is what XFS cannot do. Unlike with Windows, copying Linux file systems, including XFS, during migrations to different hardware is trivial and fast. The same is true for replicating a file system to a big set of computers.

Shrinking in place is normally needed only when you share a physical device between two different operating systems with incompatible file systems, e.g. Windows and Linux, and you discover that you did not partition the physical device well, so you want to shrink the partition allocated to one operating system in order to expand the partition allocated to the other.

Sharing physical devices between Windows and any other operating system comes with a lot of risks and disadvantages, so I strongly recommend against it. I stopped sharing Windows disks decades ago. Now, if I want to use the same computer in Windows and in another operating system, e.g. Linux or FreeBSD, I install Windows on the internal SSD, and, when desired, I boot Linux or FreeBSD from an external SSD. Thus the problem of reallocating a shared SSD/HDD by shrinking a partition never arises.

24. ryao ◴[] No.44474557[source]
In what way do you consider CoW to be an efficiency gain? Traditionally, it is considered more expensive due to write amplification. In place filesystems such as XFS tend to be more efficient in terms of IOPs and CoW filesystems need to do many tricks to be close to them.

As for ensuring data integrity, I cannot talk about other CoW filesystems, but ZFS has an atomic transaction commit that relies on CoW. In ZFS, your changes either happened or they did not happen. The entire file system is a giant merkle tree and every change requires that all nodes of the tree up to the root be rewritten. To minimize the penalty of CoW, these changes are aggregated into transaction groups that are then committed atomically. Thus, you simultaneously have both the old and new versions available, plus possibly more than one old version. ZFS will start recycling space after a couple of transaction group commits, but often you can go further back in its history if needed after some catastrophic event, although ZFS makes no solid guarantee of this (until you fiddle with module parameter settings to make reclaim less aggressive).

If it counts for anything, I have hundreds of commits in OpenZFS, so I am fairly familiar with how ZFS works internally.
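The merkle-tree commit described above can be sketched in miniature (illustrative only, not ZFS's on-disk layout): every change produces a new root, while untouched subtrees are byte-identical and can be shared with the old tree, so both versions remain addressable until space is recycled.

```python
# Toy Merkle tree: rewriting one leaf changes every hash on the path to
# the root, but untouched subtrees are identical and thus shareable.
import hashlib

def h(*parts):
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def build(leaves):
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                        # levels[-1][0] is the root

old = build([b"a", b"b", b"c", b"d"])
new = build([b"a", b"X", b"c", b"d"])    # rewrite one leaf

assert old[-1][0] != new[-1][0]          # every change yields a new root
assert new[1][1] == old[1][1]            # untouched right subtree is shared
```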

25. ryao ◴[] No.44474604{3}[source]
Large file writes are an exception in ZFS. They are broken into multiple transactions, which can go into multiple transaction groups, such that the updates are not ACID. You can see this in the code here:

https://github.com/openzfs/zfs/blob/6af8db61b1ea489ade2d5344...

Small writes on ZFS are ACID. If ZFS made large writes ACID, large writes could block the transaction group commit for arbitrarily long periods, which is why it does not. Just imagine writing a 1PB file. It would likely take a long time (days?) and it is just not reasonable to block the transaction group commit until it finishes.

That said, for your example, you will often have all of the writes go into the same transaction group commit, such that it becomes ACID, but this is not a strict guarantee. The maximum atomic write size on ZFS is 32MB, assuming alignment. If the write is not aligned to the record size, it will be smaller, as per:

https://github.com/openzfs/zfs/blob/6af8db61b1ea489ade2d5344...
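The splitting can be sketched like this (illustrative only; the 32 MiB cap and 128 KiB recordsize mirror the numbers above, but the chunking rule is simplified, not lifted from the OpenZFS code):

```python
# Sketch: one large write becomes several smaller transactions; an
# unaligned head chunk first runs only to the next record boundary.
def split_write(offset, length, recordsize=128 * 1024,
                max_chunk=32 * 1024 * 1024):
    chunks = []
    while length > 0:
        if offset % recordsize:
            n = min(length, recordsize - offset % recordsize)
        else:
            n = min(length, max_chunk)
        chunks.append((offset, n))
        offset += n
        length -= n
    return chunks

# an aligned 1 GiB write becomes 32 separate 32 MiB transactions,
# which may land in different transaction group commits
assert split_write(0, 1 << 30) == [(i * (32 << 20), 32 << 20)
                                   for i in range(32)]
```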

26. shtripok ◴[] No.44476293{4}[source]
On some of my zfs servers, the number of snapshots (mostly periodic, rotated — hour, day, month, updates, data maintenance work) is 10-12 thousand. LVM can't do that.