
68 points | ingve | 1 comment | source
gerdesj ◴[] No.43711952[source]
Many years ago I looked after a Novell cluster of three hosts with a rather expensive FC-connected array. So what - that's pretty normal?

It was the early noughties and a TB was expensive. I wrote a spreadsheet with inputs from the Novell .ocx jobbies. The files were stored on some Novell NSS volumes.

I was able to show all states of the files and aggregate stats too.

Nowadays a disc is massive and worrying about compression is daft

replies(3): >>43712426 #>>43712994 #>>43714105 #
1. yjftsjthsd-h ◴[] No.43712426[source]
> Nowadays a disc is massive and worrying about compression is daft

I wouldn't go that far. I've professionally seen storage pools with a compression factor of 2-3x, and it really mattered at that job. For that matter, my home directory on the laptop I'm writing this comment from is sitting around 1.2-1.3x, and that's pretty nice. I don't know that I'd go to a whole lot of effort (although if I were getting paid to save money on storing terabytes, it might be worthwhile), but the technology has become much easier to use.
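(Compression factors in that range are plausible depending on the data: repetitive text like logs compresses very well, while already-compressed or random data barely compresses at all. A quick sketch with Python's zlib, using made-up sample data, illustrates the spread:)

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Uncompressed size divided by compressed size (2.0 means '2x')."""
    return len(data) / len(zlib.compress(data))

# Hypothetical log-like data: highly repetitive, compresses extremely well.
text_like = b"timestamp=2024-01-01 level=INFO msg=request served\n" * 1000

# Random bytes stand in for already-compressed data: no real gain possible.
random_like = os.urandom(len(text_like))

print(f"text-like:   {ratio(text_like):.1f}x")
print(f"random-like: {ratio(random_like):.2f}x")
```

The point being that a pool-wide 2-3x average just means a lot of the data looked more like the first case than the second.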