
215 points ksec | 1 comment
LeoPanthera No.45076431
It's orphaned in Debian as well, but I'm not sure what significant advantages it has over btrfs, which is very stable these days.
replies(1): >>45076586 #
betaby No.45076586
btrfs was unusable in multi-disk setups on kernels 6.1 and older. I haven't tried since then. How stable is btrfs today in such setups?

Also see https://www.phoronix.com/news/Josef-Bacik-Leaves-Meta

replies(5): >>45076637 #>>45076834 #>>45076978 #>>45076998 #>>45081574 #
cmurf No.45076834
Absurd to claim it’s unusable without any qualification whatsoever.

The single, dup, raid0, raid1, and raid10 profiles have been usable and stable for a decade or more.
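
For reference, the profile is chosen at mkfs time, separately for data and metadata. A minimal sketch, with placeholder device names:

    # -d selects the data profile, -m the metadata profile
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc    # two-disk mirror
    mkfs.btrfs -d raid10 -m raid1 /dev/sd[b-e]        # four-disk striped mirror
    mkfs.btrfs -d single -m dup /dev/sdb              # one disk, metadata duplicated

An existing filesystem can also be converted between profiles later, e.g. with btrfs balance start -dconvert=raid1 -mconvert=raid1 <mount>.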

replies(1): >>45081435 #
bigstrat2003 No.45081435
I lost my BTRFS RAID-1 array a year or two ago when one of my drives went offline. Just poof, data gone and I had to rebuild. I am not saying that it happens all the time, but I wouldn't say it's completely bulletproof either.
replies(2): >>45082950 #>>45084864 #
cmurf No.45084864
There are hundreds of possible explanations, but without any evidence at all you've selected one. It's simply not a compelling story.

Disaster recovery isn't obvious on any setup I've worked with. I have to RTFM to understand each system's idiosyncrasies.
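
Btrfs raid1 is a case in point: with a member missing, it refuses a normal mount and wants an explicit degraded mount, which surprises people used to mdadm. A sketch of the usual recovery path, with hypothetical device names and devid:

    mount -o degraded /dev/sdb /mnt            # mount despite the missing member
    btrfs filesystem show /mnt                 # find the devid of the missing drive
    btrfs replace start -B 2 /dev/sdd /mnt     # replace missing devid 2 with a new disk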

The idea that some filesystems have no bugs is absurd. The idea that filesystems can mitigate all bugs in drive firmware or elsewhere in the storage stack is also absurd.

My anecdata: hundreds of intentional sabotage runs against Btrfs mid-write, in single-drive and raid1 configurations, including physically disconnecting a drive. Not once have I encountered an inconsistent filesystem, or lost data that had reached stable media. Not once. It always mounted without needing a filesystem check. This is on consumer hardware.

There's always some data loss in the single-drive case, no matter the filesystem: some of the data or fs metadata simply isn't persistent yet when the failure hits. Raid1 helps with that, because as long as the hardware problem is isolated to the first drive, the data still reaches stable media on the second.

Of course, I'm no more a scientific sample than you are, and my tests are rudimentary compared to the many thousands of synthetic tests fstests runs against Linux filesystems, both generic and fs-specific, every cycle. But they are real-world tests, and they suggest no inherent problem that inevitably leads to the data loss you describe.
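
For anyone wanting to reproduce this kind of sabotage in software, the dm-flakey target (the same mechanism fstests uses for its crash-consistency tests) can stand in for yanking a cable. A rough sketch, with placeholder devices:

    # wrap one device in a dm-flakey target that fails all I/O
    # for 5 seconds out of every 10
    SIZE=$(blockdev --getsz /dev/sdc)
    dmsetup create flaky --table "0 $SIZE flakey /dev/sdc 0 5 5"
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/mapper/flaky
    mount /dev/sdb /mnt
    dd if=/dev/urandom of=/mnt/junk bs=1M count=1024 conv=fsync   # write under faults
    umount /mnt && btrfs check /dev/sdb                           # verify consistency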