As it stands, there is nothing demonstrating that this lossy compression doesn't destroy essential information that an LLM would need.
I also have a gut feeling that the average LLM will actually have more trouble with the dense format + the instructions to decode it than with a huge human-readable file. Remember, LLMs are trained on internet content, which contains terabytes of textual technical documentation but 0 bytes of this ad-hoc format.
I am happy to be proven wrong on both points (LLMs are also very unpredictable!), but the burden of proof for an extravagant scheme like this lies squarely with the author.