
621 points sebg | 3 comments
randomtoast ◴[] No.43717002[source]
Why not use CephFS instead? It has been thoroughly tested in real-world scenarios and has demonstrated reliability even at petabyte scale. As an open-source solution, it can run on the fastest NVMe storage, achieving very high IOPS over a 10-gigabit or faster interconnect.

I think their "Other distributed filesystem" section does not answer this question.
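For anyone curious what the client side looks like, here is a minimal smoke-test sketch using the libcephfs Python bindings (python3-cephfs). The conf path, directory, and file names are assumptions for illustration, not anything from the article.

    # Hedged sketch: assumes python3-cephfs, a readable /etc/ceph/ceph.conf,
    # and a client keyring allowed to mount the default filesystem.
    import cephfs

    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()  # mount the default filesystem at its root
    try:
        fs.mkdirs(b'/smoke-test', 0o755)

        # Write a small file, then read it back.
        fd = fs.open(b'/smoke-test/hello.txt', 'w', 0o644)
        fs.write(fd, b'hello from libcephfs', 0)
        fs.close(fd)

        fd = fs.open(b'/smoke-test/hello.txt', 'r', 0o644)
        print(fs.read(fd, 0, 64))  # read(fd, offset, length)
        fs.close(fd)
    finally:
        fs.unmount()
        fs.shutdown()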

replies(4): >>43717453 #>>43717925 #>>43719471 #>>43721116 #
1. tempest_ ◴[] No.43717453[source]
We have a couple ceph clusters.

If my systems guys are telling me the truth, it is a real time sink to run and can require an awful lot of babysitting at times.

replies(2): >>43717486 #>>43717538 #
2. huntaub ◴[] No.43717486[source]
IMO this is the problem with all storage clusters that you run yourself, not just Ceph. Ultimately, keeping data alive through instance failures is just a lot of maintenance that needs to happen (even with automation).
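To make "even with automation" concrete, this is roughly the kind of check such automation runs in a loop, sketched with the librados Python bindings (python3-rados); the conf path and what you do on a bad status are assumptions.

    # Hedged sketch: poll cluster health the way "ceph status" does, via the
    # librados Python bindings. Assumes /etc/ceph/ceph.conf and a client key
    # with monitor read access; JSON field layout varies a bit across releases.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({'prefix': 'status', 'format': 'json'}), b'')
        if ret != 0:
            raise RuntimeError('mon_command failed: %s' % errs)
        status = json.loads(outbuf)
        health = status['health']['status']  # e.g. HEALTH_OK / HEALTH_WARN
        if health != 'HEALTH_OK':
            # Real automation would page someone or start remediation here.
            print('cluster needs attention:', health)
    finally:
        cluster.shutdown()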
3. _joel ◴[] No.43717538[source]
I admin'd a cluster about 10 years back, around the time BlueStore arrived, and it was 'ok' then. One issue was definitely my mistake, but it wasn't all that bad.