
181 points ekiauhce | 1 comment
fxtentacle ◴[] No.42224763[source]
This guy clearly failed because he didn't actually do any compression; he just abused the filesystem to store parts of the data and then tried to argue that metadata was not data...
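For context, here is a minimal Python sketch of that kind of filesystem trick (assuming the commonly retold split-on-a-byte approach; the actual entry may have differed in detail). Drop every occurrence of one byte value and split the data at those points: the stored files then hold fewer bytes than the original, while the dropped byte's positions hide in the file boundaries.

    import os

    SPLIT_BYTE = b"\x05"  # hypothetical choice; any byte value present in the data works

    def compress_to_dir(data: bytes, out_dir: str) -> None:
        # Drop every occurrence of SPLIT_BYTE and store the pieces as
        # numbered files. The stored bytes total len(data) minus the
        # number of occurrences, so the files look "smaller".
        os.makedirs(out_dir, exist_ok=True)
        for i, piece in enumerate(data.split(SPLIT_BYTE)):
            with open(os.path.join(out_dir, f"{i:06d}"), "wb") as f:
                f.write(piece)

    def decompress_from_dir(out_dir: str) -> bytes:
        # The dropped bytes are recovered from the file boundaries,
        # i.e. from information carried entirely by filesystem metadata.
        names = sorted(os.listdir(out_dir))
        pieces = [open(os.path.join(out_dir, n), "rb").read() for n in names]
        return SPLIT_BYTE.join(pieces)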

But FYI, someone else actually managed to compress that exact same data: https://jasp.net/tjnn/2018/1.xhtml

replies(6): >>42224865 #>>42224936 #>>42225046 #>>42225431 #>>42225920 #>>42235599 #
elahieh ◴[] No.42225920[source]
I'm rather skeptical of the claim that the data was compressed in 2018, because no further information is given apart from a hash value.

If the claim is true, they must have identified some "non-random" aspect of the original data, in which case they could have given more detail.

replies(2): >>42231463 #>>42235151 #
1. tugu77 ◴[] No.42231463[source]
Easy.

Save the sha256 hash of original.dat in compressed.dat. The decompressor cats /dev/random until data of the right size comes out with the correct hash.
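A minimal Python sketch of that scheme (hypothetical helper names; the file size is assumed to be known to the decompressor, and the brute-force loop would of course never terminate in practice):

    import hashlib

    def compress(original_path: str, compressed_path: str) -> None:
        # "Compress": keep only the 32-byte SHA-256 digest of the original.
        with open(original_path, "rb") as f:
            digest = hashlib.sha256(f.read()).digest()
        with open(compressed_path, "wb") as f:
            f.write(digest)

    def decompress(compressed_path: str, output_path: str, size: int) -> None:
        # "Decompress": cat /dev/random until a block of the right size
        # hashes to the stored digest. Expected to take on the order of
        # 2^256 draws, i.e. it never finishes in practice.
        with open(compressed_path, "rb") as f:
            target = f.read()
        with open("/dev/random", "rb") as rand:
            candidate = rand.read(size)
            while hashlib.sha256(candidate).digest() != target:
                candidate = rand.read(size)
        with open(output_path, "wb") as f:
            f.write(candidate)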

Now there are two cases.

1. The reconstructed data is actually equal to original.dat. Challenge won, cash in $5000.

2. The reconstructed data differs from original.dat. It has the same hash though, so you found a collision in sha256. World fame.

In either case, win!