Also see https://www.phoronix.com/news/Josef-Bacik-Leaves-Meta
All the anecdotes I see tend to be “my drive didn’t mount, and I tried nothing before giving up because everyone knows BTRFS sux lol”. My professional experience, meanwhile, is that I’ve never once failed to recover (very easily!) a BTRFS drive someone else had given up for dead… just by running its standard recovery tools.
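For what it’s worth, the “standard recovery tools” here are just stock btrfs-progs plus a mount option. A rough sketch of the usual escalation, least destructive first (device paths are placeholders, and --repair really is a last resort):

    # Try mounting read-only from an older tree root first
    mount -o ro,usebackuproot /dev/sdX /mnt

    # If the superblock is damaged, restore it from one of the backup copies
    btrfs rescue super-recover /dev/sdX

    # If mount fails while replaying the log tree, clear it (loses the last few seconds of fsyncs)
    btrfs rescue zero-log /dev/sdX

    # Pull files off the unmounted filesystem as a safety net before any repair attempt
    btrfs restore /dev/sdX /path/to/recovery/dir

    # Diagnose with check; only reach for --repair after everything else
    btrfs check --readonly /dev/sdX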
Disaster recovery isn't obvious on any setup I've worked with. I have to RTFM to understand each system's idiosyncrasies.
The idea that some filesystems have no bugs is absurd. The idea that filesystems can mitigate all bugs in drive firmware or elsewhere in the storage stack is also absurd.
My anecdata: hundreds of runs intentionally sabotaging Btrfs mid-write, in single-drive and raid1 configurations, including physically disconnecting a drive. Not one time have I encountered an inconsistent filesystem, or loss of data once it had reached stable media. Not one. It always mounted without needing a filesystem check. This is on consumer hardware.
There's always some data loss in the single-drive case, no matter the filesystem: some of the data or fs metadata isn't yet persistent. Raid1 helps with that, because as long as the hardware problem that affected the first drive is isolated to it, the data still gets written to stable media on the other drive.
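To make the “not yet persistent” part concrete: an application's recent writes only survive a power cut once it has forced them to stable media, and that's true on any filesystem. A trivial illustration (paths are made up):

    dd if=/dev/urandom of=/mnt/data/blob bs=1M count=16 conv=fsync   # fsync the file before dd reports success
    sync -f /mnt/data                                                # or flush the whole containing filesystem

Anything written after the last fsync/sync is fair game to disappear in a crash, and that loss isn't the filesystem's fault.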
Of course, I'm no more a scientific sample than you are, and my tests are rudimentary compared to the many thousands of synthetic tests fstests runs against Linux filesystems, both generic and fs-specific, every development cycle. But it is a real-world test, and it suggests no inherent problem that inevitably means data loss as you describe.
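For anyone curious what that synthetic coverage looks like, fstests (a.k.a. xfstests) is easy to point at a scratch Btrfs setup. A minimal sketch, assuming two spare block devices and the build dependencies installed (device paths and mount points are placeholders):

    git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
    cd xfstests-dev && make

    cat > local.config <<'EOF'
    export FSTYP=btrfs
    export TEST_DEV=/dev/vdb
    export TEST_DIR=/mnt/test
    export SCRATCH_DEV=/dev/vdc
    export SCRATCH_MNT=/mnt/scratch
    EOF

    ./check -g quick     # fast subset
    ./check -g auto      # the broad regression group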