

214 points ksec | 21 comments
LeoPanthera ◴[] No.45076431[source]
It's orphaned in Debian as well, but I'm not sure what significant advantages it has over btrfs, which is very stable these days.
replies(1): >>45076586 #
betaby ◴[] No.45076586[source]
btrfs was unusable in multi-disk setups on kernels 6.1 and older. I haven't tried since then. How stable is btrfs today in such setups?

Also see https://www.phoronix.com/news/Josef-Bacik-Leaves-Meta

replies(5): >>45076637 #>>45076834 #>>45076978 #>>45076998 #>>45081574 #
1. LeoPanthera ◴[] No.45076637[source]
It's sort of frustrating that this constantly comes up. It's true that btrfs does have issues with RAID-5 and RAID-6 configurations, but this is frequently used (not necessarily by you) as some kind of gotcha as to why you shouldn't use it at all. That's insane. I promise that disk spanning issues won't affect your use of it on your tiny ThinkPad SSD.

It's important to note that striping and mirroring works just fine. It's only the 5/6 modes that are unstable: https://btrfs.readthedocs.io/en/stable/Status.html#block-gro...
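Roughly, the per-profile picture from the linked Status page can be summarized like this (a sketch, with statuses paraphrased; check the docs for your kernel version):

```python
# Rough summary of btrfs block-group profile stability,
# paraphrased from the btrfs Status documentation.
PROFILE_STATUS = {
    "single": "OK",
    "dup": "OK",
    "raid0": "OK",        # striping
    "raid1": "OK",        # mirroring
    "raid10": "OK",
    "raid5": "unstable",  # parity modes: known problems
    "raid6": "unstable",
}

def is_safe(profile: str) -> bool:
    """Return True only for profiles the docs mark OK."""
    return PROFILE_STATUS.get(profile) == "OK"

print(is_safe("raid1"))  # True
print(is_safe("raid6"))  # False
```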

replies(5): >>45076707 #>>45076727 #>>45076740 #>>45076809 #>>45077208 #
2. rendaw ◴[] No.45076707[source]
> on your tiny ThinkPad SSD

Ad hominem. My ThinkPad SSD is massive.

replies(1): >>45076790 #
3. AaronFriel ◴[] No.45076727[source]
Respectfully to the maintainers:

How can this be a stable filesystem if parity is unstable and risks data loss?

How has this been allowed to happen?

It just seems so profoundly unserious to me.

replies(1): >>45077432 #
4. betaby ◴[] No.45076740[source]
But RAID-6 is the closest approximation to raid-z2 from ZFS, and raid-z2 has been stable for over a decade. Indeed, btrfs works just fine on my laptop. My point is that Linux lacks a ZFS-like filesystem for large multi-disk setups.
replies(1): >>45076800 #
5. LeoPanthera ◴[] No.45076790[source]
Good news, it will work just fine on that too.
6. NewJazz ◴[] No.45076800[source]
Seriously, for the people who take filesystems seriously and have strong preferences... multi-disk support might be important.
replies(1): >>45077429 #
7. __turbobrew__ ◴[] No.45076809[source]
How can I know what configurations of btrfs lose my data?

I've also had to deal with thousands of nodes kernel-panicking due to a btrfs bug in Linux kernel 6.8 (a stable Ubuntu release).

replies(2): >>45077166 #>>45080111 #
8. ffsm8 ◴[] No.45077166[source]
I thought the usual recommendation was to use mdadm to build the disk pool and then put btrfs on top of that - but that might be out of date. I haven't used it in a while.
replies(1): >>45084689 #
9. risho ◴[] No.45077208[source]
as it turns out, raid 5 and 6 being broken is kind of a big deal for people. it's also far from ideal that the filesystem has random landmines you can accidentally step on if you don't happen to read Hacker News every day.
replies(1): >>45082379 #
10. wtallis ◴[] No.45077429{3}[source]
BTRFS does have stable, usable multi-disk support. The RAID 0, 1, and 10 modes are fine. I've been using BTRFS RAID1 for over a decade and across numerous disk failures. It's by far the best solution for building a durable array on my home server stuffed full of a random assortment of disks—ZFS will never have the flexibility to be useful with mismatched capacities like this. It's only the parity RAID modes that BTRFS lacks, and that's a real disadvantage but is hardly the whole story.
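To see how that flexibility plays out, here's the usual back-of-the-envelope formula for btrfs RAID1 usable space with mismatched disks (a sketch of the heuristic; real allocation happens chunk by chunk):

```python
def raid1_usable(disks):
    """Approximate btrfs RAID1 usable capacity for mismatched disk sizes.

    Every chunk is mirrored on two different devices, so the largest
    disk can never hold more mirrored data than all the others combined.
    """
    total = sum(disks)
    largest = max(disks)
    rest = total - largest
    return rest if largest > rest else total / 2

# A home server stuffed with a random assortment of disks (sizes in TB):
print(raid1_usable([8, 4, 4, 2]))  # 9.0 TB usable out of 18 TB raw
print(raid1_usable([10, 2]))       # 2 TB: capped by the smaller disk
```

ZFS mirrors, by contrast, waste everything above the smallest disk in each vdev, which is why mismatched assortments favor btrfs RAID1.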
replies(1): >>45083148 #
11. wtallis ◴[] No.45077432[source]
Does the whole filesystem need to be marked as unstable if it has a single experimental feature? Is any other filesystem held to that standard?
replies(2): >>45079309 #>>45081095 #
12. AaronFriel ◴[] No.45079309{3}[source]
Parity support in multi-disk arrays is older than I am; it's a fairly standard feature. btrfs still doesn't support it without data-loss risks after 17 years of development.
replies(1): >>45080483 #
13. mook ◴[] No.45080111[source]
I thought most distros have basically disabled the footgun modes at this point; that is, using the configuration that would lose data means you'd need to work hard to get there (at which point you should have been able to see all the warnings about data loss).
replies(1): >>45084925 #
14. wtallis ◴[] No.45080483{4}[source]
If you're not interested in a multi-disk storage system that lacks (stable, non-experimental) parity modes, that's a valid personal preference. But it's not a justification for the position that the rest of the features can't be stable, or that the project as a whole can't be taken seriously by anyone.
replies(1): >>45084096 #
15. nextaccountic ◴[] No.45081095{3}[source]
Maybe this specific feature should be marked as unstable and default to disabled on most kernel builds, unless you add something like btrfs.experimental=1 to the kernel command line.
16. jorams ◴[] No.45082379[source]
FWIW: RAID 5 and 6 having problems is not a random hole you'll accidentally stumble into.

The man page for mkfs.btrfs says:

> Warning: RAID5/6 has known problems and should not be used in production.

When you actually tell it to use raid5 or raid6, mkfs.btrfs will also print a large warning:

> WARNING: RAID5/6 support has known problems and is strongly discouraged to be used besides testing or evaluation.

17. Filligree ◴[] No.45083148{4}[source]
That’s nice and all, but I have five disks in my server. I want the 6 mode.

In practice RAIDZ2 works great.

replies(1): >>45089687 #
18. AaronFriel ◴[] No.45084096{5}[source]
Is that what I said?
19. necheffa ◴[] No.45084689{3}[source]
This is very much a big compromise where you decide for yourself that storage capacity and maybe throughput are more important than anything else.

The md metadata is not adequately protected. Btrfs checksums can tell you when a file has gone bad, but without managing the redundancy itself btrfs can't self-heal it. And I'm sure there are caching/performance benefits left on the table when btrfs doesn't manage all the block storage itself.

20. __turbobrew__ ◴[] No.45084925{3}[source]
See the part of my comment where the btrfs kernel driver panicked on the Ubuntu 24 stable kernel.

We are using a fairly simple config, but under certain heavy load patterns the kernel would panic: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

I hear people say all the time that btrfs is stable now and that complaints are just leftover issues from when btrfs was new, but please explain to me how the bug I linked is OK in a stable kernel of the most popular Linux distro?

21. wtallis ◴[] No.45089687{5}[source]
In the case of five disks of the same capacity, RAID6 or RAIDZ2 only gets you 20% more capacity than btrfs RAID1. That's not exactly a huge disparity, usually not enough to be a show-stopper on its own. There are plenty of scenarios where the features ZFS has which btrfs lacks are more important than the features that btrfs has which ZFS lacks. My point is simply that btrfs RAID1 has its uses and shouldn't be dismissed out of hand.
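The 20% figure is simple arithmetic for n equal disks (ignoring metadata overhead):

```python
def raid6_usable(n, size):
    # RAID6/RAIDZ2: two disks' worth of parity, the rest holds data.
    return (n - 2) * size

def btrfs_raid1_usable(n, size):
    # btrfs RAID1: every chunk is stored twice.
    return n * size / 2

n, size = 5, 4  # five 4 TB disks
r6 = raid6_usable(n, size)        # 12 TB
r1 = btrfs_raid1_usable(n, size)  # 10 TB
print(f"RAID6 gives {r6 / r1 - 1:.0%} more usable capacity")  # 20%
```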