215 points | ksec

betaby ◴[] No.45076609[source]
The sad part is that despite years of development, Btrfs never reached parity with ZFS. And yesterday's news: "Josef Bacik who is a long-time Btrfs developer and active co-maintainer alongside David Sterba is leaving Meta. Additionally, he's also stepping back from Linux kernel development as his primary job." See https://www.phoronix.com/news/Josef-Bacik-Leaves-Meta

There is no 'modern' ZFS-like fs in Linux nowadays.

replies(4): >>45076793 #>>45076833 #>>45078150 #>>45080011 #
ibgeek ◴[] No.45076793[source]
This isn't Btrfs.
replies(3): >>45076826 #>>45076870 #>>45077235 #
doubletwoyou ◴[] No.45076870[source]
This might not be directly about btrfs, but bcachefs, zfs, and btrfs are the only filesystems for Linux that provide modern features like transparent compression, snapshots, and CoW.

zfs is out of tree, leaving it an unviable option for many people. This news means that bcachefs is going to be in a very weird state in-kernel, which leaves btrfs as the only other in-tree 'modern' filesystem.

This news about bcachefs has ramifications for the state of 'modern' FSes in Linux, and I'd say the news about the btrfs maintainer taking a step back is related.

replies(1): >>45076955 #
ajross ◴[] No.45076955[source]
Meh. This war was stale like nine years ago. At this point the originally-beaten horse has decomposed into soil. My general reply to this is:

1. The dm layer already gives you CoW/snapshots for any filesystem you want, and has for more than a decade. Some systems even use it for clever trickery like snapshot-based updates. Anyone who has software requirements in this space (as distinct from "wants to yell on the internet about it") is very well served; see the first sketch after this list.

2. Compression seems silly in the modern world. Virtually everything is already compressed. To a first approximation, every byte in persistent storage anywhere in the world is in a lossy media format, and the ones that aren't are in some other cooked format. The only workloads where you see significant use of losslessly-compressible data are situations (databases) where you have app-managed storage performance (and which see little value from filesystem choice) or ones (software building, data science, ML training) where lots of ephemeral intermediate files are being produced. And again, those are usages where fancy filesystems are poorly deployed; you're going to throw it all away within hours to days anyway. (See the second sketch after this list.)
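
For point 1, a minimal sketch using LVM on top of dm; the volume group, LV name, and sizes here are made up:

    # create a CoW snapshot of an existing logical volume
    lvcreate --size 2G --snapshot --name data-snap /dev/vg0/data
    # mount the point-in-time copy to inspect or back it up
    mount /dev/vg0/data-snap /mnt/snap
    # tear it down when done
    umount /mnt/snap
    lvremove -y /dev/vg0/data-snap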
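
And for point 2, a quick demonstration that already-compressed (high-entropy) data doesn't shrink again; /dev/urandom stands in for media files, and head -c 1M assumes GNU coreutils:

    # high-entropy input: gzip output is ~1 MB, i.e. no savings
    head -c 1M /dev/urandom | gzip -c | wc -c
    # repetitive text-like input: gzip output is a few KB
    yes 'SELECT * FROM logs;' | head -c 1M | gzip -c | wc -c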

Filesystems are a solved problem. If ZFS disappeared from the world today... really, who would even care? Only those of us still around trying to shout on the internet.

replies(8): >>45076983 #>>45077056 #>>45077104 #>>45077510 #>>45077740 #>>45077819 #>>45078472 #>>45080577 #
yjftsjthsd-h ◴[] No.45078472[source]
> Compression seems silly in the modern world. Virtually everything is already compressed.

IIRC my laptop's zpool has a 1.2x compression ratio; it's worth doing. At a previous job, we had over a petabyte of postgres on ZFS and saved real money with compression. Hilariously, on some servers we also improved performance because ZFS could decompress reads faster than the disk could read.
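
For anyone who wants to check their own pool, the ratio is one command away. A quick sketch, assuming a hypothetical dataset named tank/pg:

    # enable lightweight compression, then report the achieved ratio
    zfs set compression=lz4 tank/pg
    zfs get compression,compressratio tank/pg

compressratio reflects data written after compression was enabled, which is how you'd verify a number like 1.2x.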

replies(3): >>45080507 #>>45080922 #>>45081733 #
pezezin ◴[] No.45080922[source]
How do you get a PostgreSQL database to grow to one petabyte? The maximum table size is 32 TB o_O
replies(2): >>45081490 #>>45083233 #
yjftsjthsd-h ◴[] No.45083233[source]
Cumulative; dozens of machines with a combined database size over a PB even though each box only had like 20 TB.
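
A rough sketch of tallying that fleet-wide total; the hostnames and database name are invented:

    # sum pg_database_size() across every shard
    for h in db01 db02 db03; do
      psql -h "$h" -tAc "SELECT pg_database_size('app')"
    done | awk '{ s += $1 } END { print s / 1024^5, "PiB" }'

No single box comes anywhere near Postgres's per-table limit; only the combined total crosses a petabyte.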