181 points ekiauhce | 1 comment
stavros ◴[] No.42225075[source]
Sorry, if you're trying to hustle people by charging $100 per try, don't catch the sleight of hand in the "multiple files" question, and accept, you were beaten at your own game, fair and square.
replies(1): >>42225444 #
l33t7332273 ◴[] No.42225444[source]
I feel like if the FAQ requires not using filename shenanigans, then the sleight of hand was illegal the whole way.
replies(2): >>42227040 #>>42250036 #
stavros ◴[] No.42227040[source]
He didn't use filenames, he used files, and if that were illegal, Mike shouldn't have accepted it.
replies(1): >>42231550 #
anamexis ◴[] No.42231550[source]
He does use the filenames. If you change the filenames randomly (such that the files sort differently), it does not work.
replies(2): >>42232150 #>>42232155 #
hombre_fatal ◴[] No.42232155[source]
Not in any significant way. The decompressor could be changed to require you to feed the files into it in the correct order or expect some other sorting.

What you're saying is like saying that you encoded info in filenames because decompress.sh expects a file "compressed.dat" to exist. It's not describing any meaningful part of the scheme.
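
For illustration, a minimal sketch (hypothetical, not the actual challenge code) of a decompressor whose ordering comes from the command line instead of the filenames:

    # join.py (hypothetical): the parts are supplied in order on the
    # command line, so the names themselves carry no information.
    import sys

    with open("restored.dat", "wb") as out:
        for i, path in enumerate(sys.argv[1:]):
            if i > 0:
                out.write(b"5")  # re-insert the byte dropped at each split
            with open(path, "rb") as part:
                out.write(part.read())

Run it as "python join.py foo bar baz" and the order comes entirely from the invocation, whatever the files happen to be called.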

replies(1): >>42232384 #
anamexis ◴[] No.42232384[source]
The filenames contain information that you need in some way for the scheme to work.

You are combining different parts and inserting a missing byte every time you combine the files. You need to combine the parts in the correct order, and the order is part of the information that makes this work.

If the ordering isn't coming from filenames, it needs to come from somewhere else.
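
As a rough sketch of the scheme as I understand it (the part0000-style names are invented here), with the ordering living in the sorted filenames:

    # split: cut at every '5' byte, dropping the '5'; the zero-padded
    # filenames encode the reassembly order.
    data = open("original.dat", "rb").read()
    for i, part in enumerate(data.split(b"5")):
        open(f"part{i:04d}", "wb").write(part)

    # join: sorting the names restores the order, and one '5' goes back
    # in at every boundary -- the missing byte mentioned above.
    import glob
    parts = [open(p, "rb").read() for p in sorted(glob.glob("part*"))]
    open("restored.dat", "wb").write(b"5".join(parts))

Shuffle the names and sorted() reassembles the parts in the wrong order, which is the point: the ordering information is real and has to live somewhere.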

replies(1): >>42232488 #
mhandley ◴[] No.42232488[source]
You could do the same splitting trick but only split at the character '5' at progressively increasing file lengths. The "compression" would be worse, so you'd need a larger starting file, but you could still satisfy the requirements this way and be independent of the filenames. The decompressor would just sort the files by increasing length before merging.
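
Something like this (a rough sketch; it assumes the remainder after the last usable '5' still ends up longest, which a real version would have to guarantee):

    # split so that each part is strictly longer than the last; sorting
    # by size then recovers the order, whatever the files are named.
    import glob

    data = open("original.dat", "rb").read()
    parts, start, prev_len = [], 0, -1
    while True:
        # first '5' far enough along that this part beats the last one's length
        cut = data.find(b"5", start + prev_len + 1)
        if cut == -1:
            parts.append(data[start:])  # remainder; assumed to sort last
            break
        parts.append(data[start:cut])
        prev_len, start = cut - start, cut + 1  # the '5' at the cut is dropped

    for i, part in enumerate(parts):
        open(f"x{i}", "wb").write(part)

    # decompress: order by length, not by name
    chunks = sorted((open(p, "rb").read() for p in glob.glob("x*")), key=len)
    open("restored.dat", "wb").write(b"5".join(chunks))

The worse "compression" is exactly the cost of this: each part has to overshoot the previous one's length before it is allowed to end.
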
replies(2): >>42232637 #>>42235632 #
gus_massa ◴[] No.42235632[source]
Nice idea, but doesn't this require a linear increase in the length of the partial files, and a quadratic size for the original file?

If the length of a file is X, then in the next file you must skip the first X characters and look for a "5", which on average is at position X+128. So the average length of the Nth file is 128*N, and if you want to remove C bytes the size of the original file should be ~128*C^2/2 (instead of the linear 128*C in the article).
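
A quick check of that arithmetic-series step (just the sum, nothing scheme-specific):

    # if the Nth part averages 128*N bytes, then C parts total
    # 128*(1+2+...+C) = 128*C*(C+1)/2, i.e. ~128*C^2/2 for large C
    C = 200
    print(sum(128 * N for N in range(1, C + 1)))  # 2572800
    print(128 * C * C // 2)                       # 2560000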

replies(2): >>42237845 #>>42239166 #
Dylan16807 ◴[] No.42239166[source]
It does, but that's fine. He only needs to save 150-200 bytes and the file is 3MB. 128*200²/2 is 2.5MB. Though I think it would be 256* here? Still, it can be made to work.
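
Plugging in both gap estimates, for whatever it's worth:

    # bytes of original needed to save C bytes, if the average gap
    # between usable split points is g: roughly g * C^2 / 2
    for g in (128, 256):
        for C in (150, 200):
            print(g, C, g * C * C / 2 / 1e6, "MB")
    # 128 150 1.44 MB    128 200 2.56 MB
    # 256 150 2.88 MB    256 200 5.12 MB

With 256, saving the full 200 bytes would need more than the 3MB on hand, but the 150-byte end still fits, which is presumably the "can be made to work" part.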