214 points ksec | 36 comments
1. LeoPanthera ◴[] No.45076431[source]
It's orphaned in Debian as well, but I'm not sure what significant advantages it has over btrfs, which is very stable these days.
replies(1): >>45076586 #
2. betaby ◴[] No.45076586[source]
btrfs was unusable in multi-disk setups for kernels 6.1 and older. I haven't tried it since then. How stable is btrfs today in such setups?

Also see https://www.phoronix.com/news/Josef-Bacik-Leaves-Meta

replies(5): >>45076637 #>>45076834 #>>45076978 #>>45076998 #>>45081574 #
3. LeoPanthera ◴[] No.45076637[source]
It's sort of frustrating that this constantly comes up. It's true that btrfs does have issues with RAID-5 and RAID-6 configurations, but this is frequently used (not necessarily by you) as some kind of gotcha as to why you shouldn't use it at all. That's insane. I promise that disk spanning issues won't affect your use of it on your tiny ThinkPad SSD.

It's important to note that striping and mirroring work just fine. It's only the 5/6 modes that are unstable: https://btrfs.readthedocs.io/en/stable/Status.html#block-gro...
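For instance, a mirrored two-device filesystem is just this (device names here are placeholders):

    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt
    btrfs filesystem df /mnt   # should report raid1 profiles for Data and Metadata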

replies(5): >>45076707 #>>45076727 #>>45076740 #>>45076809 #>>45077208 #
4. rendaw ◴[] No.45076707{3}[source]
> on your tiny ThinkPad SSD

Ad hominem. My thinkpad ssd is massive.

replies(1): >>45076790 #
5. AaronFriel ◴[] No.45076727{3}[source]
Respectfully to the maintainers:

How can this be a stable filesystem if parity is unstable and risks data loss?

How has this been allowed to happen?

It just seems so profoundly unserious to me.

replies(1): >>45077432 #
6. betaby ◴[] No.45076740{3}[source]
But RAID-6 is the closest approximation to raid-z2 from ZFS! And raid-z2 has been stable for a decade+. Indeed, btrfs works just fine on my laptop. My point is that Linux lacks a ZFS-like fs for large multi-disk setups.
replies(1): >>45076800 #
7. LeoPanthera ◴[] No.45076790{4}[source]
Good news, it will work just fine on that too.
8. NewJazz ◴[] No.45076800{4}[source]
Seriously, for the people who take filesystems seriously and have strong preferences... multi-disk support might be important.
replies(1): >>45077429 #
9. __turbobrew__ ◴[] No.45076809{3}[source]
How can I know which configurations of btrfs will lose my data?

I have also had to deal with thousands of nodes kernel panicking due to a btrfs bug in Linux kernel 6.8 (a stable Ubuntu release).

replies(2): >>45077166 #>>45080111 #
10. cmurf ◴[] No.45076834[source]
Absurd to claim it’s unusable without any qualification whatsoever.

Single, dup, raid0, raid1, raid10 have been usable and stable for a decade or more.

replies(1): >>45081435 #
11. turtletontine ◴[] No.45076978[source]
I’ve been running btrfs on a little home Debian NAS for over a year now. I have no complaints - it’s been working smoothly, doing exactly what I want. I have a heterogeneous set of probably 6 disks, >20TB total, no problems.

Caveat: I’m using RAID 10, not a parity RAID. It could have problems with parity RAID. So? If you really really want RAID 5, then just use md to make your RAID 5 device and put btrfs on top.
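A rough sketch of that md-plus-btrfs layering (device names are just examples):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    mkfs.btrfs /dev/md0    # btrfs sees a single device; md handles the parity
    mount /dev/md0 /mnt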

12. deknos ◴[] No.45076998[source]
I run btrfs on servers and desktops. It's usable.
replies(1): >>45077119 #
13. williamstein ◴[] No.45077119{3}[source]
So do I and BTRFS is extremely good these days. It's also much faster than ZFS at mounting a disk with a large number of filesystems (=subvolumes), which is critical for building certain types of fileservers at scale. In contrast, ZFS scales horribly as the number of filesystems increases, where btrfs seems to be O(1). btrfs's quota functionality is also much better than it used to be (and very flexible), after all the work Meta put into it. Finally, having the option of easy writable snapshots is nice. BTRFS is fantastic!
replies(1): >>45078576 #
14. ffsm8 ◴[] No.45077166{4}[source]
I thought the usual recommendation was to use mdadm to build the disk pool and then use btrfs on top of that - but that might be out of date. I haven't used it in a while
replies(1): >>45084689 #
15. risho ◴[] No.45077208{3}[source]
As it turns out, RAID 5 and 6 being broken is kind of a big deal for people. It's also far from ideal that the filesystem has random landmines that you can accidentally step on if you don't happen to read Hacker News every day.
replies(1): >>45082379 #
16. wtallis ◴[] No.45077429{5}[source]
BTRFS does have stable, usable multi-disk support. The RAID 0, 1, and 10 modes are fine. I've been using BTRFS RAID1 for over a decade and across numerous disk failures. It's by far the best solution for building a durable array on my home server stuffed full of a random assortment of disks—ZFS will never have the flexibility to be useful with mismatched capacities like this. It's only the parity RAID modes that BTRFS lacks, and that's a real disadvantage but is hardly the whole story.
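For example, growing such an array with a differently-sized disk is roughly this (paths are illustrative):

    btrfs device add /dev/sdd /srv/pool
    btrfs balance start /srv/pool   # re-spread existing chunks across the new disk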
replies(1): >>45083148 #
17. wtallis ◴[] No.45077432{4}[source]
Does the whole filesystem need to be marked as unstable if it has a single experimental feature? Is any other filesystem held to that standard?
replies(2): >>45079309 #>>45081095 #
18. yjftsjthsd-h ◴[] No.45078576{4}[source]
> It's also much faster than ZFS at mounting a disk with a large number of filesystems (=subvolumes), which is critical for building certain types of fileservers at scale.

Now you've piqued my curiosity; what uses that many filesystems/subvolumes? (Not an attack; I believe you, I'm just trying to figure out where it comes up)

replies(2): >>45079289 #>>45079317 #
19. yencabulator ◴[] No.45079289{5}[source]
As far as I understand, a core use case at Meta was build system workers starting with prepopulated state and being able to quickly discard the working tree at the end of the build. CoW is pretty sweet for that.
20. AaronFriel ◴[] No.45079309{5}[source]
Parity support in multi-disk arrays is older than I am; it's a fairly standard feature. After 17 years of development, btrfs still doesn't support it without data-loss risks.
replies(1): >>45080483 #
21. williamstein ◴[] No.45079317{5}[source]
It can be useful to create a file server with one filesystem/subvolume per user, because each user has their own isolated snapshots, backups via send/recv are user-specific, quotas are easier, etc. If you only have a few hundred users, ZFS is fine. But what if you have 100,000 users? Then just doing "zpool import" would take hours, whereas mounting a btrfs filesystem with 100,000 subvolumes takes seconds. This complexity difference was a showstopper for me when architecting a certain solution on top of ZFS, despite me personally loving ZFS and having used it for a long time. The btrfs commands and UX are really awkward (for me) compared to ZFS, but btrfs is extremely efficient at some things where ZFS just falls down.
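A rough sketch of that per-user layout, with hypothetical paths and sizes:

    btrfs subvolume create /data/home/alice
    btrfs quota enable /data
    btrfs qgroup limit 25G /data/home/alice
    btrfs subvolume snapshot -r /data/home/alice /data/snapshots/alice-2024-01-01
    btrfs send /data/snapshots/alice-2024-01-01 | ssh backup btrfs receive /backup/alice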

The main criticism in this thread about btrfs involves multidisk setups, which aren't relevant for me, since I'm working on cloud systems and disk storage is abstracted away as a single block device.

replies(2): >>45079324 #>>45081856 #
22. williamstein ◴[] No.45079324{6}[source]
Incidentally, the application I'm reworking to use btrfs is cocalc.com. One of our main use cases is distributed assignments to students in classes, as part of the course management functionality. Imagine a class with 1500 students all getting an exact copy of a 50 MB folder, which they'll edit a little bit, and then it will be collected. The copy-on-write functionality of btrfs is fantastic for this use case (both in speed and disk usage).
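For the distribution step, something like the following reflink copy is all it takes (paths and file names are made up):

    # each copy shares extents with the template until a student edits it
    while read -r s; do
        mkdir -p "students/$s"
        cp -r --reflink=always assignment-template "students/$s/assignment1"
    done < students.txt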

Also, the out-of-band deduplication for btrfs using https://github.com/Zygo/bees is very impressive and flexible, in a way that ZFS just doesn't match.

23. mook ◴[] No.45080111{4}[source]
I thought most distros have basically disabled the footgun modes at this point; that is, to use a configuration that would lose data, you'd have to work hard to get there (at which point you should have seen all the warnings about data loss).
replies(1): >>45084925 #
24. wtallis ◴[] No.45080483{6}[source]
If you're not interested in a multi-disk storage system that lacks (stable, non-experimental) parity modes, that's a valid personal preference. But it's not a justification for the position that the rest of the features cannot be stable, or that the project as a whole cannot be taken seriously by anyone.
replies(1): >>45084096 #
25. nextaccountic ◴[] No.45081095{5}[source]
Maybe this specific feature should be marked as unstable and default to disabled on most kernel builds unless you add something like btrfs.experimental=1 to the kernel command line.
26. bigstrat2003 ◴[] No.45081435{3}[source]
I lost my BTRFS RAID-1 array a year or two ago when one of my drives went offline. Just poof, data gone and I had to rebuild. I am not saying that it happens all the time, but I wouldn't say it's completely bulletproof either.
replies(2): >>45082950 #>>45084864 #
27. procaryote ◴[] No.45081574[source]
If you don't trust btrfs RAID, it's perfectly possible to run btrfs on top of LVM or mdadm RAID. Then you have btrfs in a pretty happy-case single-device mode. Also, the recovery tooling is better known and tested.
28. magicalhippo ◴[] No.45081856{6}[source]
I seem to recall some discussion in one of the OpenZFS leadership meetings about slow pool imports when you have many datasets. Sadly I can't recall the details, but at least it seems to be on their radar.
29. jorams ◴[] No.45082379{4}[source]
FWIW: RAID 5 and 6 having problems is not a random hole you'll accidentally stumble into.

The man page for mkfs.btrfs says:

> Warning: RAID5/6 has known problems and should not be used in production.

When you actually tell it to use raid5 or raid6, mkfs.btrfs will also print a large warning:

> WARNING: RAID5/6 support has known problems is strongly discouraged to be used besides testing or evaluation.
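For reference, it's invocations along these lines that trigger it (devices are placeholders):

    mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd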

30. thoroughburro ◴[] No.45082950{4}[source]
What did you try before giving up?

All the anecdotes I see tend to be “my drive didn’t mount, and I tried nothing before giving up because everyone knows BTRFS sux lol”. My professional experience, meanwhile, is that I’ve never once been unable to (very easily!) recover a BTRFS drive someone else has given up for dead… just by running its standard recovery tools.

31. Filligree ◴[] No.45083148{6}[source]
That’s nice and all, but I have five disks in my server. I want the RAID 6 mode.

In practice RAIDZ2 works great.

replies(1): >>45089687 #
32. AaronFriel ◴[] No.45084096{7}[source]
Is that what I said?
33. necheffa ◴[] No.45084689{5}[source]
This is very much a big compromise where you decide for yourself that storage capacity and maybe throughput are more important than anything else.

The md metadata is not adequately protected. Btrfs checksums can tell you when a file has gone bad, but they can't self-heal it. And I'm sure there are caching/perf benefits left on the table by not having btrfs manage all the block storage itself.

34. cmurf ◴[] No.45084864{4}[source]
There are hundreds of possible explanations, but without any evidence at all, you've selected one. It's simply not a compelling story.

Disaster recovery isn't obvious on any setup I've worked with. I have to RTFM to understand each system's idiosyncrasies.

The idea that some filesystems have no bugs is absurd. The idea that filesystems can mitigate all bugs in drive firmware or elsewhere in the storage stack is also absurd.

My anecdata: hundreds of intentional sabotages of Btrfs while writing, in single-drive and raid1 configurations, including physically disconnecting a drive. Not one time have I encountered an inconsistent filesystem or data loss once the data was on stable media. Not one. It always mounted without needing a filesystem check. This is on consumer hardware.

There's always some data loss in the single drive case no matter the filesystem. Some of the data or fs metadata isn't yet persistent. Raid1 helps with that, because so long as the hardware problem that affected the 1st drive is isolated, the data is written to stable media.

Of course, I'm no more a scientific sample than you are. And also my tests are rudimentary compared to the many thousands of synthetic tests fstests performs on Linux filesystems, both generic and fs specific, every cycle. But it is a real world test, and suggests no per se problem that inevitably means data loss as you describe.

35. __turbobrew__ ◴[] No.45084925{5}[source]
See the part of my comment where the btrfs kernel driver panicked on the Ubuntu 24 stable kernel.

We are using a fairly simple config, but under certain heavy load patterns the kernel would panic: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

I hear people say all the time that btrfs is stable now and that people are just complaining about issues from when btrfs was new, but please explain to me how the bug I linked is OK in a stable version of the most popular Linux distro.

36. wtallis ◴[] No.45089687{7}[source]
In the case of five disks of the same capacity, RAID6 or RAIDZ2 only gets you 20% more capacity than btrfs RAID1. That's not exactly a huge disparity, usually not enough to be a show-stopper on its own. There are plenty of scenarios where the features ZFS has which btrfs lacks are more important than the features that btrfs has which ZFS lacks. My point is simply that btrfs RAID1 has its uses and shouldn't be dismissed out of hand.