To add to this.
The Bayesian vs. frequentist framing makes this problem hard to see clearly: both interpretations are powerful tools for finding the needle in the haystack, but the problem here is finding the hay in the haystack.
A better lens is algorithmic randomness: a recursively enumerable real (think of its binary expansion as an infinite sequence of coin flips) is algorithmically random if and only if it is a Chaitin Omega number.[1]
Chaitin's Omega is normal, which is probably easiest to picture in decimal: every digit 0-9 appears with limiting frequency 1/10, and more generally every block of k digits appears with limiting frequency 10^-k, for any window size you choose.
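To get a feel for normality (a sketch, not a proof: a finite prefix can only hint at limiting frequencies), tally digit frequencies in a number proven normal in base 10, Champernowne's constant 0.123456789101112..., against a rational like 1/7, whose expansion is eventually periodic and so can never be normal:

```python
from itertools import count
from collections import Counter

def champernowne_digits(n):
    """First n decimal digits of Champernowne's constant 0.123456789101112..."""
    digits = []
    for k in count(1):
        digits.extend(str(k))  # concatenate the naturals
        if len(digits) >= n:
            return digits[:n]

def rational_digits(p, q, n):
    """First n decimal digits of p/q (0 < p < q), by long division."""
    out, r = [], p
    for _ in range(n):
        r *= 10
        out.append(str(r // q))
        r %= q
    return out

N = 100_000
freq = Counter(champernowne_digits(N))
for d in "0123456789":
    print(d, freq[d] / N)  # all ten digits show up; frequencies drift toward 0.10

print(Counter(rational_digits(1, 7, 1_000)))  # only {1,4,2,8,5,7}: 0, 3, 6, 9 never appear
```

Convergence is slow (early prefixes over-represent leading 1s), which is itself a reminder that normality is a statement about the limit, not about anything you can observe in finite time.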
This is why the halting problem ≈ the frame problem ≈ system identification ≈ the symbol grounding problem.
Probabilities are very powerful for problems like the dining philosophers problem or the Byzantine generals problem, but those solutions are still grabbing needles every time they reach into the haystack.
Pretty much any "almost all" statement is a hay-in-the-haystack problem. For example, almost all real numbers are normal, but we have proven normality for only a few.
We can construct specific examples, e.g. Champernowne's constant 0.123456789101112... in base 10 (concatenate the naturals), with analogues in other bases, but we cannot access the typical ones. (Repeating patterns like 0.101010... in base 2 don't work: any rational is eventually periodic and therefore not normal.)
Given access to the true reals, you have a 0% chance of picking a computable number, a rational, etc., but a 100% chance of picking a normal number and a 100% chance of picking an uncomputable number.
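A hedged simulation of that "picking at random": approximate a random real by drawing its decimal digits uniformly. With probability 1 the result is normal (the digit frequencies converge to 1/10), while the chance of landing exactly on any fixed countable set, like the rationals or the computables, is 0. No finite run can distinguish computable from uncomputable, so this only illustrates the measure-one side:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is reproducible

N = 100_000
# A "random real" approximated by N uniformly drawn decimal digits.
digits = [random.randrange(10) for _ in range(N)]

freq = Counter(digits)
for d in range(10):
    print(d, freq[d] / N)  # each frequency hovers near 0.10
```

The hay-in-the-haystack point in miniature: every draw looks boringly uniform, and that boringness is exactly the almost-sure property.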
Bayesian and frequentist interpretations allow us to make useful predictions, but they are the map, not the territory.
Bayesian iid data and frequentist iid random variables play roles exactly analogous to enthalpy, Gibbs free energy, statistical-mechanics entropy, information-theoretic (Shannon) entropy, etc. The difference between them is the independent variables they depend on and the needs of the model they are serving.
You can also approach the property people usually mean by "entropy" through effective measure-zero sets, null covers, martingales, Kolmogorov complexity, compressibility, set shattering, etc.
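The compressibility lens is the easiest to demo. Here is a sketch using zlib as a crude stand-in for Kolmogorov complexity (which is itself uncomputable, so any real compressor only gives an upper bound): patterned data has an "uncommon" property the compressor can exploit, while random data does not:

```python
import random
import zlib

random.seed(0)

n = 10_000
patterned = b"ab" * (n // 2)                            # highly structured: huge redundancy
noise = bytes(random.getrandbits(8) for _ in range(n))  # pseudo-random: no usable structure

# The compressed size is only an upper bound on Kolmogorov complexity,
# but the gap between the two cases is stark.
print(len(zlib.compress(patterned)) / n)  # tiny ratio: structure found
print(len(zlib.compress(noise)) / n)      # near (or above) 1: nothing to exploit
```

The patterned string is a needle: a rare, exploitable regularity. The noise is hay, and hay doesn't compress.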
As a lens, the null cover is the most useful to my mind: a random real should not have any "uncommon" properties; it should look like the generic run of reals.
This is very different from statistical methods, or any effective, usable algorithm or program, which absolutely depend on "uncommon" properties.
Which is exactly the problem with finding the hay in the haystack: hay is boring.
[1]https://www.cs.auckland.ac.nz/~cristian/samplepapers/omegast...