77 points eatonphil | 4 comments

0cf8612b2e1e ◴[] No.40371539[source]

  Third, faster and cheaper storage devices mean that it is better to use faster decoding schemes to reduce computation costs than to pursue more aggressive compression to save I/O bandwidth. Formats should not apply general-purpose block compression by default because the bandwidth savings do not justify the decompression overhead.
Not sure I agree with that. I have a situation right now where I am bottlenecked by I/O, not compute.
replies(6): >>40372011 #>>40372288 #>>40372399 #>>40372660 #>>40373077 #>>40373820 #
zX41ZdbW ◴[] No.40372660[source]
My point is nearly the opposite: data formats should apply lightweight compression, such as lz4, by default, because it can be beneficial even when the data is read from RAM.

I have made a presentation about it: https://presentations.clickhouse.com/meetup53/optimizations/

Whether it pays off depends on the ratio between memory speed, the number of memory channels, CPU speed, and the number of CPU cores.
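
A back-of-envelope sketch of that tradeoff, in Python (all figures below are placeholders, not measurements, and the model pessimistically assumes I/O and decoding do not overlap):

  # When does reading compressed data beat reading raw?
  def read_seconds(data_gb, io_gbps, ratio=1.0, decomp_gbps=None, cores=1):
      io_time = (data_gb / ratio) / io_gbps          # transfer the (compressed) bytes
      if decomp_gbps is None:                        # uncompressed path: I/O only
          return io_time
      decomp_time = data_gb / (decomp_gbps * cores)  # decode back to raw bytes
      return io_time + decomp_time                   # no I/O/decode overlap assumed

  gb = 100
  print("raw read:         %5.1f s" % read_seconds(gb, io_gbps=7))
  print("lz4 3:1, 1 core:  %5.1f s" % read_seconds(gb, io_gbps=7, ratio=3, decomp_gbps=1.5, cores=1))
  print("lz4 3:1, 8 cores: %5.1f s" % read_seconds(gb, io_gbps=7, ratio=3, decomp_gbps=1.5, cores=8))

With these placeholder figures, one decoding core loses badly to a fast NVMe drive, while eight cores roughly break even; swap in your own numbers.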

But there are cases where compression by default does not make sense. For example, it is pointless to apply lossless compression to embeddings, since high-entropy floating-point vectors barely compress.

replies(2): >>40372775 #>>40375798 #
1. Galanwe ◴[] No.40372775[source]
Last I checked, you can't get much better than 1.5 GB/s per core with LZ4 (from RAM), at a maximum ratio of less than 3:1, and multicore decompression is not really possible unless you manually tweak the compression.

Benchmarks above that figure are usually misleading, because they assume no dependence between blocks, which is nuts. In real scenarios, blocks have to be parsed, they depend on their previous blocks, and you need to carry that context around.

My RAM can deliver close to 20GB/s, and my SSD 7GB/s, and that is all commodity hardware.

Meaning unless you have quite slow disks, you're better off without compression.
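
If you want to sanity-check the per-core figure on your own machine, here is a rough sketch using the python-lz4 bindings (assuming the lz4 package is installed; the synthetic buffer is a stand-in, and the result will vary a lot with your actual data):

  import os, time
  import lz4.frame

  raw = os.urandom(1024) * (64 * 1024)     # ~64 MiB of repetitive-ish bytes
  compressed = lz4.frame.compress(raw)

  start = time.perf_counter()
  out = lz4.frame.decompress(compressed)
  elapsed = time.perf_counter() - start

  assert out == raw
  print("ratio %.1f:1, decoded at %.2f GB/s on one core" %
        (len(raw) / len(compressed), len(raw) / elapsed / 1e9))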

replies(1): >>40372899 #
2. riku_iki ◴[] No.40372899[source]
> Last I checked you can't get much better than 1.5GB/s per core with LZ4

You can partition your dataset and process each partition on a separate core, which should get you tens or even hundreds of GB/s, no?

> up to a maximum ratio < 3:1

That obviously depends on your data pattern. If it is low-cardinality IDs, they can easily be compressed at a 100:1 ratio.
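
A quick synthetic illustration with the python-lz4 bindings (made-up data, not a real workload): the same low-cardinality int64 column can compress at ratios that differ by more than an order of magnitude depending on whether the values are clustered.

  import random
  from array import array
  import lz4.frame

  ids = [random.randrange(100) for _ in range(1_000_000)]   # 100 distinct IDs, random order

  def lz4_ratio(values):
      raw = array("q", values).tobytes()                    # ~8 MB raw int64 column
      return len(raw) / len(lz4.frame.compress(raw))

  print("unsorted column: %6.1f:1" % lz4_ratio(ids))
  print("sorted column:   %6.1f:1" % lz4_ratio(sorted(ids)))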

replies(1): >>40373006 #
3. Galanwe ◴[] No.40373006[source]
> you can partition your dataset and process each partition on separate core, which will produce some massive XX or even XXX GB/s?

Yes, but as I mentioned:

> multicore decompression is not really possible unless you manually tweak the compression

That is, there is no stable implementation out there that does it for you. You have to do it manually, and painfully (rough sketch below). At that point you are opening the door to exotic/niche compression schemes, and there are better alternatives than LZ4 once you are in that niche.
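
The manual version looks roughly like this with the python-lz4 bindings: frame the data as independently compressed partitions, then fan decompression out over a worker pool. The 4 MiB chunk size and the use of a process pool are illustrative choices only; a real implementation would use native threads and a custom framing format, since the inter-process copying here can easily eat the gain.

  from multiprocessing import Pool
  import lz4.frame

  CHUNK = 4 * 1024 * 1024  # partition size; pick to balance ratio vs parallelism

  def compress_partitioned(raw):
      # Each chunk is a self-contained LZ4 frame, so chunks can be decoded independently.
      return [lz4.frame.compress(raw[i:i + CHUNK]) for i in range(0, len(raw), CHUNK)]

  def decompress_partitioned(chunks):
      with Pool() as pool:  # one worker per core by default
          return b"".join(pool.map(lz4.frame.decompress, chunks))

  if __name__ == "__main__":
      data = b"some repetitive record\n" * 2_000_000
      chunks = compress_partitioned(data)
      assert decompress_partitioned(chunks) == data
      print("%d independent chunks, ratio %.1f:1"
            % (len(chunks), len(data) / sum(map(len, chunks))))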

> this is obviously depends on your data pattern. If it is some low cardinality IDs, they can be compressed by ratio 100 easily.

Everything is possible in theory, but we have to agree on what a reasonable expectation is. A compression factor of around 3:1 is, in my experience, what you get at a reasonable compression speed on reasonably distributed data.

replies(1): >>40373170 #
4. riku_iki ◴[] No.40373170{3}[source]
> Yes, but as I mentioned:
> multicore decompression is not really possible unless you manually tweak the compression

I don't understand your point. Decompression will be applied to separate partitions on separate cores, the same way as compression.

> Yet we have to agree on what is a reasonable expectation. A compression factor of around 3:1 is, from my experience

Well, my prod database is compressed at a ratio of 7:1 (many hundreds of billions of IDs).