> With this offer, you can tune your algorithm to my data.
One can generate a 1 GB or a 10 GB file. It is highly likely that some repeated byte pattern exists in it, findable with a sliding-window search, enough to shave off 50-100 bytes.
The decompressor is then essentially: at this index, re-insert this pattern. The compressed file simply excludes the bytes covered by that pattern.
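As a rough illustration of that idea, here is a minimal Python sketch. The names, the 64-byte window, and the 17-byte header layout are my own assumptions, not part of the challenge, and for a multi-GB file you would hash each window rather than keep it in a dict; the mechanism is the point.

```python
import struct

WINDOW = 64                      # assumed length of the repeat worth removing
HEADER = struct.Struct("<QQB")   # (insert position, copy-from position, length) = 17 bytes

def find_repeat(data: bytes, window: int = WINDOW):
    """Sliding-window search: return (first, second) positions of two
    non-overlapping occurrences of the same window, or None."""
    seen = {}
    for i in range(len(data) - window + 1):
        chunk = data[i:i + window]
        if chunk in seen and i - seen[chunk] >= window:
            return seen[chunk], i
        seen.setdefault(chunk, i)            # keep the earliest position
    return None

def compress(data: bytes):
    hit = find_repeat(data)
    if hit is None:
        return None                          # no usable repeat; try another file
    first, second = hit
    header = HEADER.pack(second, first, WINDOW)
    # The "compressed" file is the original minus the second occurrence.
    return header + data[:second] + data[second + WINDOW:]

def decompress(blob: bytes) -> bytes:
    insert_at, copy_from, length = HEADER.unpack_from(blob)
    body = blob[HEADER.size:]
    pattern = body[copy_from:copy_from + length]   # first occurrence is intact
    return body[:insert_at] + pattern + body[insert_at:]
```

With this layout the file shrinks by the window length minus the 17-byte header, about 47 bytes here, so the whole gamble hinges on the decompressor itself staying smaller than that saving.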
One may not always get such a pattern, but over multiple tries it is very feasible. The payout only needs one win in 50 tries.
You could even generate a 100 GB file; the bigger the file, the higher the chance of a repeating pattern.
The challenge is won if compressed_file + decompressor is at least one byte smaller than the original file.
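In other words, with hypothetical file names of my own choosing:

```python
import os

original = os.path.getsize("original.bin")        # the challenger's file
compressed = os.path.getsize("compressed.bin")    # output of compress()
decompressor = os.path.getsize("decompressor")    # whatever program expands it

won = compressed + decompressor < original        # smaller by at least one byte
```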
One could use a self-executing decompressor to save some of the file-overhead bytes.