Weird phrase to hear from a digital carnival scam artist.
It's not interesting to see people play word games with each other in the HN comments, so why is it interesting to see this kind of disingenuous behavior? Trolls have always existed. I don't get what's interesting about it.
But FYI someone else actually managed to compress that exact same data: https://jasp.net/tjnn/2018/1.xhtml
Of course Patrick’s solution occurred to me immediately once he started talking about splitting up the file. :)
Then I got to wondering if it would be possible to compress a large enough file and save a few bytes, in the spirit of what Mike actually wanted. For instance, if you tried encrypting with a bunch of random keys, you’ll find some such that the result ends with n 0s (say), so then your result could be something like encrypted + key + n. Typically you’d expect the key to be roughly as long as the run of 0s it buys you, so you wouldn’t actually net any savings. But I think if you searched long enough, you might “get lucky” and find a key that generated more 0s than the length of the key, so you could theoretically win the challenge. Of course you’d have to somehow embed a program which could decrypt and reassemble everything in a tiny amount of space, but it still doesn’t seem impossible in the way Mike suggests. Perhaps it just requires an intractable amount of computing power. :)
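A toy sketch of that search, using SHA-256 in counter mode as a stand-in "cipher" (the key size, try count, and data are all made-up illustration, not a real scheme):

```python
import hashlib

def trailing_zero_bits(data):
    # Count how many 0 bits the data ends with.
    n = 0
    for byte in reversed(data):
        if byte == 0:
            n += 8
            continue
        while byte & 1 == 0:
            n += 1
            byte >>= 1
        break
    return n

def keystream(key, length):
    # Toy stream cipher: SHA-256 in counter mode (illustrative only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def search(data, tries):
    # Try keys; keep the one whose "ciphertext" ends in the most 0 bits.
    best_key, best_zeros = b"", -1
    for i in range(tries):
        key = i.to_bytes(4, "big")  # enumerate 4-byte keys
        ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
        z = trailing_zero_bits(ct)
        if z > best_zeros:
            best_key, best_zeros = key, z
    return best_key, best_zeros

data = hashlib.sha256(b"challenge").digest() * 2   # a fixed 64-byte "random" file
key, zeros = search(data, 2000)
# To net a saving, the trailing zeros would have to exceed the key length
# (32 bits) plus the bits needed to store the zero count -- that's the catch.
print(zeros, "trailing zero bits found with key", key.hex())
```

With ~2000 tries you expect a run of only ~11 zero bits, well short of the 32-bit key you paid for, which is exactly the "no net savings" problem described above.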
I’m not sure if any of that made sense. I think I’m using the wrong terms for things.
If a randomly-generated file happened to contain some redundancy through sheer chance, you could hand-craft a compressor to take advantage of it. This compressor would not work in general for random data, but it could work for this one particular case.
It's a bet worth taking, because the payoff, 50:1 ($5,000 to $100), is pretty good. Play the game 50 times and you might get a file you could compress.
The challenge, then, would be for the person offering the bet to generate a really random file that contained no such redundancy. That might not be easy.
A plausible "decompressor" is at least, say, 30 or 100 bytes, so the random file needs to have 30 bytes less entropy than you expected, which happens with probability X where X << 1/50. Sum over the whole domain of reasonable decompressors, and you still don't get there.
This argument could do with more rigor, but I think it's correct. Give me 100 million to 1 odds, though, and I'll take my chances trying to brute force a compressor.
https://news.ycombinator.com/from?site=patrickcraig.co.uk
For another compression challenge that is still ongoing, try "500000€ Prize for Compressing Human Knowledge" (also known as "Hutter Prize"):
https://news.ycombinator.com/item?id=37502329 - Hutter Prize for compressing human knowledge (2023-09-13, 215 comments)
Unfortunately I can't find the article I'm describing here, maybe someone else can? It was a long time ago so I might be misrepresenting it slightly.
Suppose you win if you shave 1 bit off of a sample bit stream, and lose if you fail. Your compressor looks at the first 2 bits: if the stream starts with 11, replace those two bits with a single 1; otherwise prepend a 0 to the stream. Now you have a 1 in 4 chance of compressing a random string by 1 bit, and a 3 in 4 chance of making it longer by 1 bit.
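That scheme can be checked exhaustively on all 8-bit strings (a minimal sketch):

```python
from itertools import product

def compress(bits):
    # If the stream starts "11", drop one bit; otherwise prepend a 0.
    return "1" + bits[2:] if bits.startswith("11") else "0" + bits

def decompress(bits):
    return "11" + bits[1:] if bits[0] == "1" else bits[1:]

wins = losses = 0
for t in product("01", repeat=8):
    s = "".join(t)
    c = compress(s)
    assert decompress(c) == s      # lossless on every input
    if len(c) < len(s):
        wins += 1
    else:
        losses += 1
print(wins, losses)   # 64 wins, 192 losses: exactly 1 in 4 vs 3 in 4
```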
It’s ultimately not about compression but what odds are worth it.
For example, the all-zeros file is a member of the set of all random 3 megabyte files, and it would certainly be possible to compress that, if by great good fortune you were lucky enough to receive it - albeit something that is unlikely to ever happen in the possible lifetime of the universe, given current physical theories.
Is it possible to quantify this idea of a 'weak' file more accurately?
Patrick:
I meant can I send you a decompressor and several compressed files whose total file size is less than the original uncompressed file and from which I can regenerate the original uncompressed file
Mike:
Sure.
The decompression program needs to be restricted to itself and the single file it decompresses, and cannot use anything other than basic operating system functions for reading and writing those files. Any other bytes the program accesses can be counted toward the total.
It's very hard to constrain this without some form of cheating being possible, especially if you have some years to set it up.
I also thought about this, but when thinking more about the argument by the challenge creator, I understood that the data in the OS doesn’t help you.
If you’re given a file with 100 bits, there are 2^100 possibilities. If you want to compress to 99 bits, there are only 2^99 possible compressed outputs. So when generating and trying to compress every 100 bit file, you’re not going to be able to compress half of them. The external data doesn’t help you unless you get to modify it after receiving the file to compress. If you can get patches merged into the Linux kernel, you could use that data, but if the data is fixed, you can’t reference the data with less data than the original file (at least for some cases).
Edit: actually, when thinking about it: you could use the OS as another few bits of out-of-band information you get for free, which makes it possible again. If you have 8 different OSes to choose from, you could use their differences to compress any file. At least theoretically; in practice the ~3 bits you gain probably aren’t enough to make a difference.
You basically split the file every time you encounter a specific character, and your decompressor just combines all the files it finds, re-inserting the character you split by. If you split at every "X" character, which might occur 1000 times in the file, the decompressor only needs a small script which joins all the files and puts an "X" between them, which is less than 1000 bytes. The "trick" is storing the locations of the Xs you removed in the file sizes of the individual files.
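The split trick can be sketched in a few lines (here with an in-memory list of parts standing in for the separate files on disk):

```python
def split_compress(data, sep=0x58):
    # "Compress" by splitting at every separator byte (0x58 = 'X') and
    # dropping it. The dropped bytes are smuggled into the part boundaries,
    # i.e. into the file sizes, which the byte count doesn't measure.
    return data.split(bytes([sep]))

def join_decompress(parts, sep=0x58):
    # The decompressor just joins the parts, re-inserting the separator.
    return bytes([sep]).join(parts)

data = b"hello X world X again"
parts = split_compress(data)
restored = join_decompress(parts)
print(len(parts), "parts; bytes actually stored:",
      sum(len(p) for p in parts), "of", len(data))
```

Every occurrence of the separator costs zero stored bytes, which is why the total byte count of the parts is smaller than the original even though no information was discarded.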
But as stated, what you said is not possible. To compress by even a single byte, at most about 1 in 256 files can be compressible, or you'd have multiple different inputs compressing down to the same output.
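The pigeonhole bound is easy to check by counting, here for tiny two-byte "files":

```python
# Every compressor output at least 1 byte shorter than a 2-byte input
# must be a file of 0 or 1 bytes. Count how many such outputs exist
# versus how many inputs there are.
inputs = 256 ** 2                          # all 2-byte files: 65536
outputs = sum(256 ** k for k in range(2))  # empty file + all 1-byte files: 257
fraction = outputs / inputs
print(f"{fraction:.4%} of 2-byte files can shrink by >= 1 byte")
```

The fraction is about 0.39%, i.e. roughly 1/256, and the same counting works at any file size.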
Next, say I find some predictive model of the data M, where M is composed of N bytes. Furthermore, let us say that the entropy of the dataset under the model is H bytes.
Then, if N + H < D, my model compresses the data.
It doesn't matter if the model is deterministic or probabilistic: a probabilistic model can be used to (losslessly) compress a dataset with entropy coding.
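A toy illustration of the N + H < D test, with a byte-frequency model standing in for M (the 256-byte model size is an arbitrary assumption for the sketch, not a real codec's overhead):

```python
import math
from collections import Counter

def model_cost_bytes(data, model_bytes):
    # N + H: the model's own size plus the cross-entropy of the data
    # under the model, which an entropy coder can achieve losslessly.
    counts = Counter(data)
    H_bits = -sum(c * math.log2(c / len(data)) for c in counts.values())
    return model_bytes + H_bits / 8

data = b"abababababababab" * 64   # 1024 bytes, but only two symbols
cost = model_cost_bytes(data, model_bytes=256)
print(cost, "bytes estimated vs", len(data), "bytes raw")
```

Here the regular data has ~1 bit of entropy per byte, so N + H ≈ 384 < D = 1024 and the model "compresses" it; on uniform random bytes H alone would already be ~1024 bytes and the model would be pure overhead.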
One more argument for compression being equivalent to intelligence: Across many fields of statistical machine learning, there are generalization bounds which have the form:
test error <= train error + model complexity
That is, we don't expect to do any worse on new (test) data, than the sum of the train error and the model complexity (smallest compressed size). Notice that if you interpret the train error as the entropy of the data (i.e. error under a cross entropy loss), then the models which satisfy the statistical generalization bound correspond to those which best compress the data. In other words: the model which produces the shortest description of the data is the one which is expected to generalize best.
He obviously knows that. He knows exactly what compression is and that his solution is not it. He showed (successfully) that meeting the challenge as it was worded did not require compression, which the setter of the challenge didn't realize.
Likely files for a real human workload are like that, but if "most inputs" is talking about the set of all possible files, then that's a whole different ball game and "most inputs" will compress very badly.
> unlikely files become bigger
Yes, but when compressors can't find useful patterns they generally only increase size by a small fraction of a percent. There aren't files that get significantly bigger.
However, this restriction is too strong for the purposes of this challenge. We don’t actually need a file with low entropy; in fact I claim that a weak file exists among files with entropy 8 − epsilon (8 bits per byte being the maximum) for each epsilon > 0.
What we actually need is a sufficiently large chunk in a file to have low entropy. The largeness is in absolute terms, not relative terms.
A very simple example would be taking a very large file with maximum entropy and appending 200 zero bytes to the end. This would not decrease the entropy of the file much, but it gives way to a compression algorithm that should be able to save ~100 bytes.
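A minimal sketch of a compressor specialized to exactly that kind of weak file: strip the trailing zero bytes and record only their count in a small fixed footer.

```python
import os

def compress(data):
    # Strip trailing zero bytes; store their count in a 4-byte footer.
    stripped = data.rstrip(b"\x00")
    n_zeros = len(data) - len(stripped)
    return stripped + n_zeros.to_bytes(4, "big")

def decompress(blob):
    n_zeros = int.from_bytes(blob[-4:], "big")
    return blob[:-4] + b"\x00" * n_zeros

original = os.urandom(10_000) + b"\x00" * 200   # high entropy + weak tail
c = compress(original)
saved = len(original) - len(c)
print("saved", saved, "bytes")
```

On a fully random file this footer is a 4-byte loss; it only wins because we built the weak tail in, which is the whole point of the "weak file" idea.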
You can formalize this with rate-distortion theory, by defining a distortion function that says what your objective is. That implies a well-defined complexity relative to that distortion function.
Okay to be clear, I've written a paper on exactly this topic which will be announced in a week or so. So you won't find anything on the subject yet, haha. But I use almost exactly this example.
Treat an n-bit file as a polynomial over the finite field of characteristic 2. Some polynomials over this field are irreducible, but many have factors of x and of (x+1). Factor the polynomial as P(x) * x^j * (x+1)^k, and just collect these terms, storing only P, j, and k.
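A minimal sketch of that factorization over GF(2), with the file's bits as polynomial coefficients (exponents called j and k here to avoid clashing with the file length n):

```python
def div_x_plus_1(p):
    # Quotient of p(x) / (x + 1) over GF(2), assuming p is divisible.
    # Bit i of the integer p is the coefficient of x^i.
    q, acc = 0, 0
    for i in range(p.bit_length() - 1, 0, -1):
        acc ^= (p >> i) & 1
        q |= acc << (i - 1)
    return q

def factor_out(p):
    # Write p(x) = P(x) * x^j * (x+1)^k with P divisible by neither factor.
    j = k = 0
    while p and p & 1 == 0:        # divisible by x  <=>  constant term is 0
        p >>= 1
        j += 1
    while p > 1 and bin(p).count("1") % 2 == 0:  # divisible by x+1 <=> p(1) = 0
        p = div_x_plus_1(p)
        k += 1
    return p, j, k

poly = 0b1011100                   # an arbitrary 7-bit "file"
P, j, k = factor_out(poly)
# Round trip: multiplying by (x+1) over GF(2) is p ^ (p << 1); by x is a shift.
rebuilt = P
for _ in range(k):
    rebuilt ^= rebuilt << 1
rebuilt <<= j
print(P, j, k)
```

Of course, storing j and k costs about as many bits as the factors remove on average, so this is a re-encoding rather than a compression of random data.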
If it's a true claim they must have identified some "non-random" aspect of the original data, and then they could have given more info.
>Okay to be clear, I've written a paper on exactly this topic which will be announced in a week or so. So you won't find anything on the subject yet, haha. But I use almost exactly this example.
I would use floating-point arithmetic as the example/analogy: one trades off accuracy/precision for exactness/correctness out at the extremities. Values near the more granular representations can only be represented by their nearest representable value. If this is foreseeable, the floating-point implementation can be adjusted to change where the "extremities" are. I used the above analogy, and the following, when articulating the magnitude of near-losslessness that large LLMs have managed to attain, especially when all of humanity's corpus is compressed onto a USB flash drive; the Kolmogorov complexity captured is similar to a masterpiece like the Mona Lisa having to be described in X many brush strokes to achieve Y amount of fidelity.
> With this offer, you can tune your algorithm to my data.
One can generate a 1GB file or 10GB file. It is highly likely that there is some form of a repeatable pattern in there to shave off 50-100 bytes by sliding window search.
Then the decompressor is essentially - at this index, expand this pattern. The compressed file excludes the range of pattern.
One may not always get such a pattern, but on multiple tries it’s very feasible. Payout just needs one win in 50 tries.
You could generate a 100GB file. The bigger the file the higher the chance of repeating pattern.
The challenge is won if compressed_file + decompressor is at least one byte smaller than the original file.
One could have a self executing decompressor to save some file overhead bytes.
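A minimal sketch of that sliding-window search (hedged: a real entry would need matches long enough to outweigh the ~9 bytes of offsets and length, which is exactly where the odds get bad):

```python
import os

def find_repeat(data, k):
    # Index every k-byte window; report the first pair of positions
    # that share the same window (a candidate for back-referencing).
    seen = {}
    for i in range(len(data) - k + 1):
        w = data[i:i + k]
        if w in seen:
            return seen[w], i
        seen[w] = i
    return None

blob = os.urandom(1 << 16)
hit = find_repeat(blob, 3)   # a 3-byte repeat is near-certain in 64 KiB
print(hit)
```

By the birthday bound, short repeats are abundant; the problem is that a 3-byte match saves far less than the bytes needed to point at it, and the long matches that would actually pay off are the ones with odds like 1 in 2^50.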
The $5000 Compression Challenge - https://news.ycombinator.com/item?id=9163782 - March 2015 (168 comments)
The $5000 Compression Challenge - https://news.ycombinator.com/item?id=5025211 - Jan 2013 (61 comments)
The $5000 Compression Challenge - https://news.ycombinator.com/item?id=4616704 - Oct 2012 (119 comments)
The crux of the ongoing debate over the post has to do with the ambiguity of the stated requirements.
For there to be any rational discussion, you need to have proper unique definitions for what is being communicated. Without a definition (identity) no system of truth seeking will provide a definitive truth.
Most notably, "compression" has many different meanings currently in practical use. For example many people use the word compression to simply mean that the uncompressed size is more than the compressed size; lossy vs lossless doesn't matter when it's good enough, all that matters is that it's been reduced. An example of that might be H264 video.
Other more rigorous individuals will use it to mean entropy encoding, which has a strict definition, part of which is lossless, and in these cases they often view Shannon's theorems as gospel, and at that point they often stop thinking and may not try to sharpen their teeth on a seemingly impossible task.
These theorems are often very nuanced, with bounds that are very theoretical or still undiscovered, though seemingly correct. When someone is told something is impossible, they don't fully apply themselves to finding a solution. It's a worn path that leads nowhere, or so they imagine.
This mindset of learned helplessness prevents a nuanced intuitive understanding of such problems, and any advances stagnate. People take the impossibility at face value and overgeneralize it (without that nuanced understanding); this is a fallacy, and it can be very difficult to recognize when there is no simple way to refute it on its face.
Here is something to consider. Shannon's source coding theorem relies on the average uncertainty of probabilities as a measure of entropy (information). Some exploration is needed to understand what makes this true, or where it fails. It has been my experience that only a very few exceptional people ever attempt this process.
The main question that jumps out in my mind since compression has been a thing I've been interested in for years; is, can an average entropy in statistics tell us accurately whether this is true in systems with probabilities and distributions that are a mix of mathematical chaos but also contain regularity within their domains? Statistics can be unwieldy beasts, and act as regular sources of misunderstanding and falsehood without a very rigorous approach.
For a similar problem, people may find this article particularly relevant with regards to the chaotic 3-body problem in astrophysics.
https://www.aanda.org/articles/aa/full_html/2024/09/aa49862-...
To some extent, this resembles the approach of "hiding the data in the decompressor". But the key difference here is that you can make it less obvious by selecting a particular programming language capable of universal computation, and it is the choice of language that encodes the missing data. For example, suppose we have ~17k programming languages to choose from—the language selection itself encodes about 14 bits (log2(17000)) of information.
If there are m bits of truly random data to compress and n choices of programming languages capable of universal computation, then as n/m approaches infinity, the probability of at least one language being capable of compressing an arbitrary string approaches 1. This ratio is likely unrealistically large for any amount of random data more than a few bits in length.
There's also the additional caveat that we're assuming the set of compressible strings is algorithmically independent for each choice of UTM. This certainly isn't the case. The invariance theorem states that ∀x, |K_U(x) - K_V(x)| < c for UTMs U and V, where K is Kolmogorov complexity and c is a constant that depends only on U and V. So in our case, c is effectively the size of the largest minimal transpiler between any two programming languages that we have to choose from, and I'd imagine c is quite small.
Oh, and this all requires computing the Kolmogorov complexity of a string of random data. Which we can't, because that's uncomputable.
Nevertheless it's an interesting thought experiment. I'm curious what the smallest value of m is such that we could realistically compress a random string of length 2^m given the available programming languages out there. Unfortunately, I imagine it's probably like 6 bits, and no one is going to give you an award for compressing 6 bits of random data.
Actually this would work perfectly if you knew it was generated in a single pass by a known random number generator and you had tons of time to brute force it.
If the file were generated by a natural source of entropy then forget it.
Or even if modified in a trivial way like adding 1 to every byte.
That sometimes the smallest decompressor (including the key, in your example) will result in overall less space, but usually it will result in more (because the smallest key that generates 10 zeros happens to be 14 digits long, etc.).
And so ultimately the more interesting question becomes about those probabilities, and therefore what is the right entry price for the bet to be even?
This paper from Hutter in 2015 https://arxiv.org/abs/1510.04931 considers AIXI to be "subjective", since the choice of Universal Turing Machine is left unspecified:
> We show that Legg-Hutter intelligence and thus balanced Pareto optimality is entirely subjective, and that every policy is Pareto optimal in the class of all computable environments. This undermines all existing optimality properties for AIXI. While it may still serve as a gold standard for AI, our results imply that AIXI is a relative theory, dependent on the choice of the UTM.
In practice, I wonder what size of file we're talking about that would result in net compression on random data 50% of the time?
I have no intuition whether it's 1 GB, 1 TB, 1 PB, or beyond.
But if you can choose to lose information you can obviously achieve a higher compression score. That's literally what optical & auditory compression exploits. Indeed, we know people generally don't memorize the entire Wikipedia article. Rather they convert what they learn into some internally consistent story that they can then recite at any time & each time they recite it it's even differently worded (maybe memorizing some facts that help solidify the story).
Again, I have no problem with compression and decompression being equated to intelligence provided both are allowed to be lossy (or at least one facet of intelligence). That's because you get to inject structure into the stored representation that may not otherwise exist in the original data and you get to choose how to hydrate that representation. That's why LZMA isn't "more intelligent" than ZIP - the algorithm itself is "smarter" at compression but you're not getting to AGI by working on a better LZMA.
It's also why H264 and MP3 aren't intelligent either. While compression is lossy decompression is deterministic. That's why we can characterize LLMs as "more intelligent" than LZMA even though LZMA compresses losslessly better than LLMs.
I'll have a paper out next week which makes your point precise, using the language of algorithmic rate-distortion theory (lossy compression applied to algorithmic information theory + neural nets).
I think another way of understanding this is through the "Value Equivalence Principle", which points out that if we are learning a model of our environment, we don't want to model everything in full detail, we only want to model things which affect our value function, which determines how we will act. The value function, in this sense, implies a distortion function that we can define lossy compression relative to.
Let s be an algorithmically random string relative to UTM A. Is it the case that there exists some pathological UTM S, such that K(s|S) (the Kolmogorov complexity of s relative to S) is arbitrarily small? I.e. the blank print statement of S produces s. And there always exists such an S for any s?
Is there some way of defining a meta-complexity measure, the complexity of some UTM without a reference UTM? It seems intuitive that although some pathological UTM might exist that can "compress" whichever string you have, its construction appears very unnatural. Is there some way of formalizing this "naturalness"?
Save the sha256 hash of original.dat in compressed.dat. The decompressor cats /dev/random until data of the right size comes out with the correct hash.
Now there are two cases.
1. The reconstructed data is actually equal to original.dat. Challenge won, cash in $5000.
2. The reconstructed data differs from original.dat. It has the same hash though, so you found a collision in sha256. World fame.
In either case, win!
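A toy version of the scheme, shrunk to a 2-byte "file" so the brute-force "decompressor" actually terminates:

```python
import hashlib
from itertools import product

def compress(data):
    # "Compression": keep only the hash (the length rides along out of band).
    return hashlib.sha256(data).digest()

def decompress(digest, length):
    # Enumerate all files of the given length until the hash matches.
    # Feasible only at toy sizes; for a 3 MB file this outlives the universe.
    for candidate in product(range(256), repeat=length):
        data = bytes(candidate)
        if hashlib.sha256(data).digest() == digest:
            return data
    raise ValueError("no preimage found")

original = b"hi"
restored = decompress(compress(original), len(original))
print(restored)
```

Note the "compressed" file is 32 bytes, so this only even pretends to shrink files bigger than the digest plus the decompressor, and case 2 (a SHA-256 collision) is the part you should not hold your breath for.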
They were lossless last season. :-D
The sophistication of my compression understanding at the time extended to Lempel-Ziv and Huffman and well... that's about it. So I was like, “Okay, I’m not going to create the world’s best compression method on a whim.” So I started thinking: if I can’t compress the file, maybe I can just fetch it from somewhere else.
The first thing I tried was writing a program that searches online by uploading the file onto a public FTP. From there, I could fetch it. Simple enough, right?
Well, here’s the issue with that: there’s no guarantee I’d even have internet connectivity. That’s probably the first thing he’d block.
So, he gives us the file. I think it was something like 10 MB, and when I looked through it, the entropy was high. Like, really high. Standard compression tools like 7Z or rar wanted NOTHING to do with it.
Then I started thinking again. Maybe this guy’s running all the submissions back-to-back. He’s probably processing them in sequence because he’s got tons of entries coming in. So I thought: if I can’t download the file, maybe there's a chance it's already on the machine from somebody's previous simulation - I can find it somewhere on the hard drive. My program might not be a great decompression system, but you know what it is? A fantastic file search.
So, I threw together a program that tree searched the entire disk with a simple checksum to verify I’d found the exact right file. If it matched, I’d just copy the file into the current directory and say, “I’m done.” - though if it found the file too quickly it would continue idling for a while so as to avoid suspicion. (I was also banking on the fact that someone else’s program might’ve run before mine, leaving the file lying around somewhere for me to grab.)
Of course, I couldn’t make this look too obvious. I mean, imagine submitting a tiny C++ program claiming it could compress a 10 MB file into just a few kilobytes. He’d immediately know it was fake.
So, I padded the compressed binary, adding a bunch of random hexadecimal and binary junk to bulk it up to around 5 MB. It was still phenomenal, better than any other method on the market for this type of file, but at least it seemed somewhat plausible. I mean, maybe I’d "discovered" some revolutionary new compression algorithm or something and smashed the Weissman score.
The contest went on for nearly a month, and the guy had a leaderboard tracking scores. Most people were hovering just slightly above standard compression rates—around 9 MB. Nothing groundbreaking, though.
So, it’s a Friday night. I finish the program, pack it all up, and send it off. Then I go to bed. Needless to say, I woke up to quite an excited response from my professor.
Likewise this challenge would have been stronger if the requirement was to compress two provided arbitrary files :P
Are you trying to imply that people need to put up big sums of money when they want to critique someone else's definitions be taken seriously? If yes, I think that's ridiculous. If no, I can't figure out the point of your comment.
But if you change them without making them sort differently, everything is fine. He depends on the order, not the filenames. You could even remove the filenames entirely, as long as you patch the code to account for such a strange environment.
What you're saying is like saying that you encoded info in filenames because decompress.sh expects a file "compressed.dat" to exist. It's not describing any meaningful part of the scheme.
Which is basically what he’s done here. He’s cheating by marking the location of the repeating sequence using unique files, rather than some other actually more efficient location system.
This is a fun discussion! I think it helps to make compression feel more approachable for armchair enthusiasts.
You are combining different parts and inserting a missing byte every time you combine the files. You need to combine the parts in the correct order, and the order is part of the information that makes this work.
If the ordering isn't coming from filenames, it needs to come from somewhere else.
UTM provides no gain.
I think the other way to read it is that you fundamentally have to choose (via your freedom to design the algorithm) which inputs in the map from 2^|x| possible files to 2^|compress(x)| possible outputs end up getting no compression (i.e. |compress(x)| > |x|, forced by the pigeonhole principle) and which ones get to be called compressed. So you do have an implied kind of lossiness with respect to which parts of the input domain are actually compressed.
One argument: if your method worked, then we could compress nearly any file. This violates the pigeonhole principle.
Another example: for a 4 GB file you’d need roughly a 4-byte integer to specify where the repeat started and another for where the second copy was, for 8 bytes. Then you need a length byte. Now you need a repeated sequence of 10+ bytes for this to compress anything. There are 256^10 = 2^80 possible such 10-byte sequences, and only ~2^32 positions in a 4 GB file. So the odds of a repeat are around 1 in 2^50.
Tweak methods and estimate as you wish. It fails.
000 ... 000
000 ... 001
...
111 ... 110
111 ... 111
then you have 2^b rows. Suppose you look at the sub-bitstrings of length k, k < b. They all appear the same number of times if you count them wherever they appear, at any position, across the entire matrix. However, you also know for sure that if a prefix-free code appears in a particular row, nothing else can overlap with its span in that row. What does that imply? That the prefix-free codes have a greater 'occupancy percentage' of a single row than all other sub-bitstrings. That means that you must find fewer of them, on average, inside of a single row.
But since we know that all sub-bitstrings appear the same number of times throughout the entire matrix, what else can we deduce? That the prefix-free codes must appear /over more rows/ on average, if they cannot appear as many times at bit positions /along the columns/. That means they will occur as a sub-pattern in full bitstrings more often than typical random sub-patterns.
So weakness here corresponds to the presence of patterns (prefix-free codes) that are:
- non-overlapping within bitstrings
- widely distributed across bitstrings
- due to their wide distribution, there's a higher chance of encountering these patterns in any given random file
- therefore, weak files are more compressible because they contain widely-distributed, non-overlapping patterns that compression algorithms can take advantage of
Seeds are something like 32 bits, though it depends on the exact implementation. But not the length of files.
The challenge (based on expected value) is not to find some compression scheme that will compress 100% or even 50% of files to less than their original size. Instead it's to find any (set of) compression schemes that will compress more than 1/50 = 2% of arbitrary random files to less than their original size. You can construct such a program essentially as they described by asking for a large file size and then searching for a valid decompression program within that file. With standard tools you can make some valid decompression program quite short, which will allow you a reasonable probability that it will appear within a large randomly-generated file.
The cost that you pay for using a compression scheme like this is that where the compression scheme works you'll win some bytes, but in the more common case where it doesn't you'll lose some, even for the simple `cat $1 > original` default decompressor.
Now, a clever contest creator might arbitrarily choose files that win against such decompressors as they can think of, but Mike appears not to have done so.
Presume this compressor guarantees the output will always be at least one byte smaller (with the exception of the impossible to find input)
So just keep running your data in a loop through the compressor. If you start with a 1 MB file, it will take a maximum of a million iterations until the output shrinks down to zero bytes, which is the smallest possible file. Record how many iterations it took.
You can now extract your file by feeding a zero-byte file into the decompressor and running the same number of iterations. Which means you can store every 1 MB (or smaller) file in the world in just 20 bits... But that would mean there are only a million possible 1 MB files?
Even if you put some minimum output size limitation on the compressor, say it can't produce any file less than 512 bits, the same argument applies. It's just that the numbers get bigger.
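The contradiction can be made concrete with a two-line counting check (using the 1 MB figure from the argument above):

```python
# A 20-bit iteration count distinguishes only 2^20 outcomes, but there are
# 256^1_000_000 distinct 1 MB files -- so the scheme cannot be reversible.
iteration_counts = 2 ** 20
file_bits = (256 ** 1_000_000).bit_length() - 1   # bits of choice in a 1 MB file
print(iteration_counts, "countable outcomes vs 2^%d possible files" % file_bits)
```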
Renegadry aside, for those who are more interested in the Information Theory perspective on this:
Kolmogorov complexity is a good teaching tool, but hardly used in engineering practice because it contains serious foot-guns.
One example of defining K complexity(S, M) is the length of the shortest initial tape contents P for a given abstract machine M such that, when M is started on this tape, the machine halts with final tape contents P+S. Obviously, one must be very careful to define things like “initial state”, “input”, “halt”, and “length”, since not all universal machines look like Turing machines at first glance, and the alphabet size must either be consistent for all strings or else appear as an explicit log factor.
Mike’s intuitive understanding was incorrect in two subtle ways:
1. Without specifying the abstract machine M, the K complexity of a string S is not meaningful. For instance, given any S, one may define an abstract machine with a single instruction that prints S, plus other instructions to make M Turing complete. That is, for any string S, there is an M_S such that complexity(S, M_S) = 1 bit. Alternatively, it would be possible to define an abstract machine M_FS that supports filesystem operations. Then the complexity using Patrick’s solution could be made well-defined by measuring the length of the concatenation of the decompressor P with a string describing the initial filesystem state.
2. Even without adversarial examples, and with a particular M specified, uniform random strings’ K complexity is only _tightly concentrated around_ the strings’ length plus a machine-dependent constant. As Patrick points out, for any given string length, some individual string exemplars may have much smaller K complexity; for instance, due to repetition.
There's a game called Skedaddle, and a Usenet group, rec.players.skedaddle. A well-known player, Mike, offers a $5000 challenge for a feat, called the four-froot-hoot, which he believes is simply impossible to achieve.
A fellow skedaddling enthusiast, Patrick, takes him up on the challenge, some baseline terms are negotiated, and Patrick presents his putative four-froot-hoot. Some might say it meets the terms of the challenge.
Mike objects. He says "you're a rat, Pat! Look at the FAQ, Jack, that's not even skedaddle" and indeed, the F.A.Q. of rec.players.skedaddle does state in plain language that it ain't even skedaddle, so it can't be a four-froot-hoot.
You make a bet in the club, you play by the club rules.
There are some formats which search for a valid header anywhere in the file, but then they'd get tripped up by all the mostly-valid headers that don't actually yield a decompressed file.
The only way to win is to reverse the random number generator used to create the contest file, or to exploit some bias if there is one. But if the file is generated properly, most likely with some CSPRNG, good luck going that route.
The contest creator has to use a random file, any attempt to be clever will actually decrease entropy and may result in the contestant winning.
Yes. It’s not even that hard to create. Just take a standard UTM and perform a branching “if” statement to check if the input is the string of interest before executing any other instructions.
> Is there some way of defining a meta-complexity measure, the complexity of some UTM without a reference UTM?
Haha, not that anyone knows of. This is one of the issues with Solomonoff induction as well. Which UTM do we pick to make our predictions? If no UTM is privileged over any other, then some will necessarily give very bad predictions. Averaged over all possible induction problems, no single UTM can be said to be superior to the others either. Solomonoff wrote an interesting article about this predicament a while ago.
(A lot of people will point to the constant offset of Kolmogorov complexity due to choice of UTM as though it somehow trivializes the issue. It does not. That constant is not like the constant in time complexity which is usually safe to ignore. In the case of Solomonoff induction, it totally changes the probability distribution over possible outcomes.)
Some volunteer puts in time maintaining something and makes a claim that is obviously correct and - most likely in jest - promises cash to anyone who shows it to be wrong. Then some rules lawyer decides that he's better than the unpaid volunteer and does his best to humiliate him, just for the lulz.
Is it any surprise that volunteers rarely stick around for long?
Nowadays lots of companies run bug bounty programs where if you find a bug and report it responsibly you can get real cash. This is a valuable service. And I'm positive that these programs get gobs of submissions from people trying to see if they can find a bug in the rules, rather than in the software. If the people vetting the submissions to such programs weren't getting paid for it, I'm sure they would quit in disgust.
It’s a moot point anyway as most programming languages are capable of universal computation.
Again, I’m not claiming we can compress random data. I’m claiming we can (sometimes) make it look like we can compress random data by selecting a specific programming language from one of many possibilities. This selection itself constitutes the “missing” information.
Kolmogorov complexity is useless as an objective measure of complexity.
The "it ain't even skedaddle" aspect is just incorrect. Compression is a term of art, and it is very general. Saying that "hiding" things in the filesystem is cheating is no different than saying that using the previous frame to reduce the size of the following frame in a video compressor is cheating. Yes, you do have to be careful of exactly what your inputs are, but there is great value in taking advantage of additional forms of input even if they might be initially unexpected.
Besides, Mike's smugness deserves some comeuppance.
Edit: I loved this (minus the missing ending). Really sucked me in with the level of detail.
In that case Mike, the contest creator, may declare the result invalid because you haven't satisfied the intent of the challenge.
Now let's look at how modern compression actually works. Images, movies, text documents, audio files... there is fundamental structure in all this data. In text it might be using only a certain character range, a certain lexical word list, and so on. There may be geometric information in images and movies that could be turned into code to drastically cut down file size. With audio, you can sample it and discard what you don't need. And so on and so forth. Structure. But how do you apply any of these methods to random data? There is no structure; you can't sample it, reduce it, simplify it... Every bit has meaning.
So please, dear human thinking about compression of random data: it is an insane waste of intellectual effort. It's impossible. Don't do it. I made the same mistake (and set up vast amounts of HPC to work on algorithms). COMPLETE. Waste of time.
The trick is that the missing character is static, in the 'decompressor'. It's inserted between every segment, each of which was trimmed to end where that character occurred in the original.dat file. The number at the end of each segment filename corresponds to the segment number, in the order a Bourne shell increments them. It does not handle errors or validate the output.
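For the curious, the trick can be sketched in a few lines of Python (my reconstruction of the idea, not Patrick's actual shell script):

```python
# "Compress" by deleting every occurrence of one fixed byte, b"5",
# splitting the data into an ordered run of segments; "decompress" by
# joining the segments with that static byte re-inserted.
original = b"ab5cd5ef"

segments = original.split(b"5")  # each segment becomes one numbered file
restored = b"5".join(segments)   # the 'decompressor' re-inserts the 5s

assert restored == original
# The segment *order* (here the list index; in the post, the numbered
# filenames) is carried out of band - that's where the saved bytes hide.
```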
It does fulfill the discussed terms of the challenge, which fail to include any reference to an external set of rules.
With a big enough file those fractions of a bit add up to a non-trivial number of bits. You can be cunning about how you encode your deltas, too (the next delta makes use of unused bits remaining from the previous delta).
I haven't worked through all the details, so it might be that in the end everything rebalances to a no, but I'd like to withhold judgement for the moment.
If the length of a file is X, then in the next file you must skip the first X characters and look for a "5" that on average is at position X+128. So the average length of the Nth file is 128*N, and if you want to save C bytes, the size of the original file should be ~128*C^2/2 (instead of the linear 128*C in the article).
I was responding to:
> chances are the length of the seed is equal to the length of the original file
And why would the chances be that? You'd really have to go out of your way for that. I don't even know if there are libraries that can handle a seed and state length on the scale of megabytes. No, chances are 99.99+% it used a seed of a few bytes, because that's how common random number generators designed for efficiency work.
I wouldn’t go so far as to say Kolmogorov complexity is useless as an objective complexity measure, however. The invariance theorem does provide a truly universal and absolute measure of algorithmic complexity — but it’s the complexity between two things rather than of one thing. You can think of U and V as “representatives” of any two partial recursive functions u(x) and v(x) capable of universal computation. The constant c(u, v) is interesting then because it is a natural number that depends only on the two abstract functions themselves and not the specific Turing machines that compute the functions.
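For reference, the invariance theorem being paraphrased here is usually stated as:

```latex
\exists\, c(u, v)\ \forall x :\quad K_u(x) \le K_v(x) + c(u, v)
```

where the constant c(u, v) is independent of the string x.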
What does that mean philosophically? I’m not sure. It might mean that the notion of absolute complexity for a finite string isn’t a coherent concept, i.e., complexity is fundamentally a property of the relationship between things rather than of a thing.
lim n→∞ K(X|n)/n
Possible solutions that come to mind:
1) UTMs are actually too powerful, and we should use a finitary abstraction to have a more sensible measure of complexity for finite strings.
2) We might need to define a kind of "relativity of complexity". This is my preferred approach and something I've thought about to some degree. That is, that we want a way of describing the complexity of something relative to our computational resources.
Top 10 repeated byte patterns and their counts:
Bytes: 7eda16, Count: 3
Bytes: 65b1a4, Count: 3
Bytes: 71d745, Count: 3
Bytes: b72808, Count: 2
Bytes: 60e3ee, Count: 2
Bytes: 6e9152, Count: 2
Bytes: 26446b, Count: 2
Bytes: e4a05a, Count: 2
Bytes: 67f86a, Count: 2
Bytes: 92c487, Count: 2
Since the most common three-byte sequence only appears 3 times, it seems like a non-starter. No longer byte sequences appeared more than twice either. I generated the random data with this:
dd if=/dev/random of=random.bin bs=1024 count=1440
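A sketch of how such a tally can be produced (my code, not the original poster's; os.urandom stands in for the random.bin produced by dd, and overlapping 3-byte windows are an assumption):

```python
from collections import Counter
import os

# Stand-in for random.bin (1440 KiB, as in the dd command above).
data = os.urandom(1440 * 1024)

# Tally every overlapping 3-byte window and report the top 10.
counts = Counter(data[i:i + 3] for i in range(len(data) - 2))
for pattern, n in counts.most_common(10):
    print(f"Bytes: {pattern.hex()}, Count: {n}")
```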
Related to compression, this is, as I said, irrelevant. Kolmogorov complexity, in the first-order naive sense, picks a machine and then talks about length. In the complete sense, due to Kolmogorov invariance, which extends the naive version to any possible encoding, it shows that there is a fundamental minimal length over any machine. Then one proves that the vast majority of strings are incompressible. Without this machine invariance there could be no notion of inherent Kolmogorov complexity of a string.
This has all been known since the 1960s.
So no, it matters not which UTM, FSM, TM, Java, Hand coded wizard machine, or whatever you choose. This line of reasoning has been investigated decades ago and thrown out since it does not work.
Your claim
>then there exist an infinite number of UTMs that will compress any specific string
combined with your claim
> For example, suppose we have ~17k programming languages to choose from—the language selection itself encodes about 14 bits (log2(17000)) of information.
Does not let you use that information to compress the string you wish to compress, unless you're extremely lucky and the information encoded by the language choice happens to match part of the string - say, its prefix - so that you know how to apply it. Otherwise you also need to encode how the language choice maps to the prefix (or any other part), and that information is going to be as large as or larger than what you saved. Your program, to reconstruct the data you think the language choice hides, has to restore that data - how exactly does your choice of program do this? Does it list all 17k languages and look up the proper information? Whoops, once again your program is too big.
So no, you cannot get a free ride with this choice.
By your argument, Kolmogorov complexity (the invariance version over any possible machine) allows compression of any string, which has been proven impossible for decades.
What is the distribution of the complexity of a string? Is there some Chernoff-like bound?
I don’t follow. Wouldn’t that be (because not random)
On top of this, it takes more data to store the address of the repeating byte pattern than the original pattern does. Therefore you are creating more data than you are saving.
If it were this easy then hundreds of people would have taken on the challenge and won the $5k.
I'm saying order does matter and it's the only thing that matters about the separate files using this code.
The problem with comp.compression was always that its Eternal September is new people showing up every week claiming they’ve found a universal compression algorithm that compresses everything - the perpetual motion machine of information theory. Having gotten tired of explaining to people who think they’re fighting “dogma” not the laws of the universe, people start trying to find other ways to make them put up or shut up, so they could have some peace and quiet.
Like sending them a file full of RNG output and waiting for them to realize this was their teachable moment.
Winning the challenge - without engaging in out-of-band bullshit (the size of compressor + output should be the giveaway) - doesn't prove you have found a universal compression algorithm. It only proves the RNG is flawed. Which would be very interesting, but it wouldn't break Shannon.
The problem is compressor + file means “no out of band data” to a reasonable person and we have already established we are dealing with unreasonable people.
> My main motivation was to "out-trick the tricker". I thought the chances of me making any money were very remote.
For example, you could store them in a Set object in many programming languages, one that preserves insertion order. Or you could be extracting them one by one from a tar file that has blank filenames stored in it.
4k is 8.3 megapixels, at least 24 bits per pixel, and 24 frames a second about 4.8 Gbps if you include the audio. Netflix streams at 15.6Mbps, which is more than 300:1. We talked here a couple years ago about how HBO redid their “dead channel” intro to make the “white noise” compatible with video compression so it didn’t look like ass.
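The back-of-envelope numbers check out (my arithmetic, assuming 3840x2160 "4K", 24-bit color, 24 fps, and the quoted 15.6 Mbps stream):

```python
# Quick sanity check of the raw-vs-streamed bitrate figures above.
pixels = 3840 * 2160                  # ~8.3 megapixels
raw_bps = pixels * 24 * 24            # 24 bits/pixel * 24 frames/second
stream_bps = 15.6e6                   # quoted Netflix rate

print(f"raw: {raw_bps / 1e9:.2f} Gbps")        # ~4.78 Gbps
print(f"ratio: {raw_bps / stream_bps:.0f}:1")  # ~306:1
```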
It would come from however the files were transferred from competitor computer to verifier computer.
> For the tar file, the tar file would be larger than the input file it's supposed to be "compressing".
It sure would be! I don't see how that's relevant to the filename discussion though?
Neither of your ideas work with this.
In terms of just transferring the files in order, then you need to delineate the start and end of each file, which will take more space than the byte you are removing. Same with the tar file.
I think you've fundamentally misunderstood my point.
You seem to think I'm trying to make it work without a trick, but I'm not doing that. I'm saying yes there is a trick, but the trick is not based around filenames. The trick needs a sequence of variable-size blobs of bytes, and there's a lot of ways to maintain a sequence.
If you patched `cat` to ignore filename and just spit out each file as given, the script would still work without a single change. If you slightly changed the script to loop over the results of `ls`, it could still be compatible with scrambled filenames.
A script that didn't cheat would also be using filenames to a similar level.
In other words the filenames are a completely fair implementation detail. And that detail can be swapped out without changing the trick in any meaningful way.
The trick is based on having a series of variable-sized blobs of bytes. That's all it needs. If I use javascript instead of sh, and my decompressor is `[...s].join('5')`, I'm using the same trick.
This isn't true! If you scrambled the filenames, the files would be put together in the wrong order and the result would be incorrect. You would need to also transmit the order that the files would be put together separately, which again, together with the size of the files themselves, would be greater than the size of the output.
The key thing here is that the trick works by storing the information of how the blobs are ordered out-of-band. In the OP, that out-of-band place to store the blob order is filename. In your JS example of `[...s].join('5')`, where does the order of [...s] come from? It's not something you can hand-wave away, it's the key thing that makes the trick work.
2. A FAQ for the newsgroup is not automatically part of the rules for the challenge.
3. If the entire FAQ is treated as rules text, then the rules directly say you cannot win, and that's not an acceptable way to do rules.
I said "could" because you'd have to either do a limited scramble or hotwire ls to use the right order despite the scrambling. Or sort by date or inode, probably.
> The key thing here is that the trick works by storing the information of how the blobs are ordered out-of-band.
Yes. That is the key, not the filenames.
> In the OP, that out-of-band place to store the blob order is filename.
It is, but the actual use of filenames is not a shenanigan, and the blob order could be easily accomplished without any particular filenames.
> In your JS example of `[...s].join('5')`, where does the order of [...s] come from? It's not something you can hand-wave away, it's the key thing that makes the trick work.
It comes from the process of loading the blobs onto the computer. I'm not trying to hand-wave it, I'm saying it doesn't need filenames or anything resembling filenames. Maybe it came from a tar. Maybe I sent each file in a separate email. All that matters is having an order, and having an order happens by default when you have multiple files. As long as you don't go out of your way to reorder things, the trick works.
I guess that's where we disagree. I think you don't have an order by default; you need to explicitly define it, and transmit it, and store it somehow. Which is, after all, why it's not true compression. When you account for that metadata, the "compressed" data is not smaller than the original.
In the OP, the cheat was using filenames to store that data. In a tar file, it's using the tar file metadata to store it. In your email example, you're using the email metadata to keep that ordering. In all cases, order is a key thing that you need to explicitly define, transmit, and store. And in all cases, this metadata takes up more space than is saved by the whole scheme.
It's files on a computer. Those always have an order. Acting like there isn't an order takes active work.
> Which is after all, why it's not true compression. When you account for that metadata, the "compressed" data is not smaller than the original.
Yeah sure, I have never disagreed on this.
> In the OP, the cheat was using filenames to store that data.
Let me make my argument extra clear, and you can tell me if you disagree with either point, and exactly how you disagree with that point:
A) OP did not do anything untoward with filenames. They did a simple loop and even threw away the actual contents of the filename. Even code that wasn't cheating would have a similar use of filenames.
B) OP's trick is not "based on" filenames, it's based on having an order. There are many ways to have an order, and their choice of using filenames is very shallowly integrated into their code.
> B) OP's trick is not "based on" filenames, it's based on having an order. There are many ways to have an order, and their choice of using filenames is very shallowly integrated into their code.
I think this is a distinction without a difference. OP's trick is based on having an order via filenames. They could have used a different trick that used something besides filenames for ordering, but they didn't.
If instead of asking if he could use multiple files, he asked if he could use a tar file, or email each file separately and specified that they should be fed to the decompressor in order, he would likely have been declined outright because the cheat is more obvious.
Specifically I don't think it rises to the level of violating rules on filenames. That's why I think the distinction can matter.
The line about filename shenanigans being disallowed.
>A FAQ for the newsgroup is not automatically part of the rules for the challenge.
I think it’s completely clear that this FAQ was about the challenge.
And you can get rid of that risk by requiring 100 bytes of shrink. Just measure the size right.
Additionally, Mike's behavior during the exchange makes him feel all the more untrustworthy. His condescension, playing the fool and intentionally misinterpreting the test to "win", remarks that seem specifically placed to needle and aggravate Pat knowing there's no way to force him to pay up, and threats of accusations of fraud were a show of really poor character. At the least it would've been more of a class act (even if he never paid out) to admit that Pat outplayed him thanks to naivety and a self-inflated sense of cleverness on Mike's part, and to admit that he is not familiar with betting culture and got in over his head.
I don't think that person was being verbatim or summarizing well, can you find support in the actual FAQ?
The part where the FAQ calls out filenames, the example is lmfjyh.c putting the entire program in the filename. Nothing like that is happening here.
The FAQ does talk about being so strict you can't even get bit length, but that kind of strictness is clearly not being applied to Mike's challenge.
> I think it’s completely clear that this FAQ was about the challenge.
Unless I missed there being two FAQs, you have completely misread the situation.
This is the FAQ: http://www.faqs.org/faqs/compression-faq/part1/index.html
It is not about the challenge.
And once you ban that, it's impossible from an information theoretic point to win the challenge.
Let's step back and consider a different perspective that builds on some key mathematical principles:
Every binary file, regardless of its content or randomness, can be viewed as representing one specific large number. This isn't controversial - it's just a different way of looking at the data.
Many large numbers can be expressed more efficiently using mathematical notation (e.g., 10^100 is far more compact than writing out 1 followed by 100 zeros). Furthermore, this efficiency advantage tends to increase with larger numbers.
These numerical conversions are perfectly lossless and reversible. We can go from binary to decimal and back without losing any information.
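Point 3 on its own is easy to verify (a quick Python check, mine):

```python
import os

blob = os.urandom(32)                # any binary file's bytes
n = int.from_bytes(blob, "big")      # the file as one large number
back = n.to_bytes(len(blob), "big")  # and back again

assert back == blob
# Caveat: to invert the mapping you must also remember len(blob),
# since leading zero bytes vanish in the integer.
```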
This naturally leads to some interesting questions: What happens to our usual compression impossibility proofs when we consider perfect numerical transformations rather than traditional pattern matching? Could mathematical expressions capture patterns that aren't obvious in binary form? As numbers get larger, does the potential for more efficient mathematical representation increase?
The KDP patent explores some of these ideas in depth. I'm not claiming this solves all compression challenges - but I think it highlights how fresh perspectives and mathematical approaches might reveal new ways of thinking about data representation.
Would be curious to hear others' thoughts, especially from those with expertise in number theory or information theory. How do these mathematical properties interact with our traditional understanding of compression limits?
Nothing in the challenge says anything about doing universal compression. In fact the key point I'm trying to make is that you _don't_ have to make a universal compressor or break entropy to win, instead just find some scheme that makes 2% of random files shorter than the original, even if it makes 98% of files much much longer. The overall entropy is increased. I wouldn't use this for anything else. But it does beat the challenge as stated.
I'd agree that Patrick didn't follow the "spirit" of the challenge or do anything interesting with compression. But in doing this I think he made a good point, which is roughly that you need to be very careful and explicit when posing challenges like this or people are going to use your sloppy wording against you.
IMO, the fun part of compression algorithms is that the set of files that become smaller is as narrow as possible while the set of files that become bigger is as big as possible, so _most_ files don't compress well! The trick is to get the set of files that get smaller to be just the useful files and nothing else.
I'd agree that if this was a free entry situation he'd be fully within his rights turning down trolls, rules lawyers, etc. for the same reasons you mentioned. Trying to scam a well intentioned but naive bounty post would be sad behavior. But this guy was clearly taking money on a position he never intended to pay out on and not losing any sleep over it.
And that chance is too low to distinguish from zero before the universe dies.
Any time you say that something can't be done, you will attract people who say "I have found a way to do it!" Even more so if you have a proof that it can't be done. This phenomenon is far older than the internet; the term "morbus cyclometricus" suggests its origins in antiquity.
The FAQ maintainer was not looking to profit off fools. They wanted to chase them off; after spending several pages explaining why the task is impossible, it is tedious to explain to a crank (the term generally used for such people, as described by Underwood Dudley) that they are in fact wrong, the task is impossible, and here's why. It's easier to say "Pay me if you want to waste my time" and reasonably assume that no one will do it.
It's similar in spirit to the James Randi challenge. You're not going to win it, because you're not going to prove that math or science is wrong. But the JREF had as its goal to expose the charlatans, and had the time and resources to devote to the task. A newsgroup FAQ maintainer has neither the time nor the resources. So they shouldn't have volunteered? Then you have no volunteers. Hooray for the internet.
Now yes, every once in a great while someone will come along and legitimately do something that was claimed un-doable; the obvious case is George Dantzig. But that case also shows that any scientist or mathematician who can be shown an error in something thought impossible will be thrilled at the discovery, because that is new knowledge, which is the whole point. Poking a hole in the rules of the contest, finding a weasel way around them, is not something interesting at all.
As far as I'm aware, the James Randi challenge doesn't require a participant to pay anything - it's genuinely a challenge and not a bet. It's not taking advantage of idiots to part them from their money and then gloating over them after.
Because it’s out of context. It also makes the winner seem more sympathetic, which is why it’s no accident that it’s not provided. It’s the Missing, Missing Reason.
If you have $100 in disposable income it might be worth the lark if the officiant is careless with their chosen text. Though odds are good he pulled it from random.org. That’s where we used to send people.