
215 points | ksec | 1 comment
betaby No.45076609
The sad part is that despite years of development, Btrfs never reached parity with ZFS. And yesterday's news: "Josef Bacik who is a long-time Btrfs developer and active co-maintainer alongside David Sterba is leaving Meta. Additionally, he's also stepping back from Linux kernel development as his primary job." See https://www.phoronix.com/news/Josef-Bacik-Leaves-Meta

There is no 'modern' ZFS-like fs in Linux nowadays.

replies(4): >>45076793 #>>45076833 #>>45078150 #>>45080011 #
ibgeek No.45076793
This isn’t BTRFS
replies(3): >>45076826 #>>45076870 #>>45077235 #
doubletwoyou No.45076870
This might not be directly about btrfs, but bcachefs, zfs, and btrfs are the only filesystems for Linux that provide modern features like transparent compression, snapshots, and CoW.

zfs is out of tree, leaving it as an unviable option for many people. This news puts bcachefs in a very weird state in-kernel, which leaves btrfs as the only other in-tree 'modern' filesystem.

So this news about bcachefs has ramifications for the state of 'modern' FSes in Linux, and I'd say the news about the btrfs maintainer taking a step back is related.

replies(1): >>45076955 #
ajross No.45076955
Meh. This war was stale like nine years ago. At this point the originally-beaten horse has decomposed into soil. My general reply to this is:

1. The dm layer has given you CoW/snapshots for any filesystem you want for more than a decade. Some implementations actually use it for clever trickery like updates, even. Anyone who has software requirements in this space (as distinct from "wants to yell on the internet about it") is very well served.

2. Compression seems silly in the modern world. Virtually everything is already compressed. To a first approximation, every byte in persistent storage anywhere in the world is in a lossy media format, and the ones that aren't are in some other cooked format. The only workloads with significant amounts of losslessly-compressible data are situations (databases) with app-managed storage performance (which see little value from filesystem choice) or ones (software building, data science, ML training) that produce lots of ephemeral intermediate files. And again, those are usages where fancy filesystems are poorly deployed; you're going to throw it all away within hours to days anyway.
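Whether data is already "cooked" is easy to check empirically. A quick zlib sketch (illustrative only; random bytes stand in for media files, and the thresholds are not claims about any particular filesystem):

```python
import os
import zlib

# "Already cooked" data: random bytes stand in for JPEG/MP4/zip payloads.
cooked = os.urandom(100_000)

# Losslessly-compressible data: repetitive text, like logs or source trees.
text = b"GET /index.html HTTP/1.1 200 1024\n" * 3000

cooked_ratio = len(zlib.compress(cooked)) / len(cooked)
text_ratio = len(zlib.compress(text)) / len(text)

print(f"cooked data:     {cooked_ratio:.3f}x of original size (no win)")
print(f"repetitive text: {text_ratio:.3f}x of original size (big win)")
```

On the cooked payload, transparent compression buys roughly nothing (the output is even slightly larger, due to framing overhead), while the repetitive text collapses dramatically.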

Filesystems are a solved problem. If ZFS disappeared from the world today... really who would even care? Only those of us still around trying to shout on the internet.

replies(8): >>45076983 #>>45077056 #>>45077104 #>>45077510 #>>45077740 #>>45077819 #>>45078472 #>>45080577 #
anon-3988 No.45076983
> Filesystems are a solved problem. If ZFS disappeared from the world today... really who would even care? Only those of us still around trying to shout on the internet.

Yeah nah, have you tried processing terabytes of data every day and storing them? It gets better now with DDR5 but bit flips do actually happen.

replies(3): >>45077066 #>>45077162 #>>45077439 #
bombcar No.45077162
Bit flips can happen, and if it’s a problem you should have additional verification above the filesystem layer, even if using ZFS.

And maybe below it.

And backups.

Backups make a lot of this minor.

replies(1): >>45077286 #
toast0 No.45077286
Backups are great, but don't help much if you back up corrupted data.

You can certainly add verification above and below your filesystem, but the filesystem seems like a good layer to have verification. Capturing a checksum while writing and verifying it while reading seems appropriate; zfs scrub is a convenient way to check everything on a regular basis. Personally, my data feels important enough to make that level of effort, but not important enough to do anything else.
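The capture-on-write / verify-on-read idea can be sketched above the filesystem too. A toy sidecar-checksum helper (hypothetical names and paths, nothing ZFS-specific):

```python
import hashlib
import json

def write_with_checksum(path: str, data: bytes) -> None:
    """Write data and record its SHA-256 in a sidecar file."""
    with open(path, "wb") as f:
        f.write(data)
    digest = hashlib.sha256(data).hexdigest()
    with open(path + ".sha256", "w") as f:
        json.dump({"sha256": digest}, f)

def read_verified(path: str) -> bytes:
    """Read data back; fail loudly if it no longer matches its checksum."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = json.load(f)["sha256"]
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"checksum mismatch for {path}: silent corruption?")
    return data

write_with_checksum("/tmp/demo.bin", b"important bytes")
assert read_verified("/tmp/demo.bin") == b"important bytes"

# A "scrub" then amounts to re-reading everything through read_verified()
# on a schedule, which is exactly what zfs scrub automates at pool level.
```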

replies(1): >>45077563 #
ajross No.45077563
FWIW, framed the way you do, I'd say the block device layer would be an *even better* place for that validation, no?

> Personally, my data feels important enough to make that level of effort, but not important enough to do anything else.

OMG. Backups! You need backups! Worry about polishing your geek cred once your data is on physically separate storage. Seriously, this is not a technology choice problem. Go to Amazon and buy an exfat stick, whatever. By far the most important thing you're ever going to do for your data is Back. It. Up.

Filesystem choice is, and I repeat, very much a yell-on-the-internet kind of thing. It makes you feel smart on HN. Backups to junky Chinese flash sticks are what are going to save you from losing data.

replies(2): >>45077728 #>>45078612 #
toast0 No.45078612
I appreciate the argument. I do have backups. ZFS makes it easy to send snapshots, so I do.

But I don't usually verify the backups, so there's that. And everything is in the same zip code for the most part, so one big disaster and I'll lose everything. C'est la vie.
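For reference, the snapshot-send workflow mentioned here looks roughly like this (the pool, dataset, and host names are made up for illustration):

```shell
# Take a snapshot, then replicate it to another machine
zfs snapshot tank/home@2024-06-01
zfs send tank/home@2024-06-01 | ssh backuphost zfs receive backup/home

# Later snapshots can be sent incrementally from the previous one
zfs snapshot tank/home@2024-06-02
zfs send -i tank/home@2024-06-01 tank/home@2024-06-02 | \
    ssh backuphost zfs receive backup/home
```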

replies(1): >>45082193 #
petre No.45082193
What good is a backup if you can't restore it?
replies(1): >>45083405 #
toast0 No.45083405
Well, I expect that I can restore it, and that expectation has been good enough thus far. :p