
92 points by jxmorris12 | 1 comment
jxmorris12 No.43764187
I'm currently a machine learning grad student taking a meta-complexity class and came across this blog post. I found the whole thing very interesting. In particular the idea that some things are uncomputable seems fundamentally unaddressed in ML.

We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data. Clearly there's more to the story here.
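To make the "learnable" assumption concrete, here is a minimal sketch of it in action: a one-hidden-layer network trained by plain backpropagation to approximate sin(x). All names, sizes, and hyperparameters are my own illustrative choices, not from this thread; the point is that low training error on a sampled range says nothing about the function anywhere else.

```python
# Toy universal-approximation sketch: fit sin(x) on [-pi, pi] with a
# one-hidden-layer tanh network and full-batch gradient descent.
# Hyperparameters are illustrative, not tuned.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, (256, 1))
y = np.sin(X)

H = 32                                  # hidden width
W1 = rng.normal(0, 1, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))
b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

for step in range(5000):
    h, pred = forward(X)
    err = pred - y                      # gradient of 0.5 * squared error
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)      # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))
print(mse)   # small on the training range; off-distribution, all bets are off
```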

replies(2): >>43764826 #>>43765944 #
nyrikki No.43764826
While I am too old to say what is taught today, this is exactly the map-vs-territory issue I mentioned in my other comment.

It is all there in what you would have been taught, but hidden: we tend to avoid the hay-in-the-haystack problems and focus on the needles, because we don't have access to the hay.

As an example that can cause huge arguments: if you used Rudin for analysis, go look for an equality binary operator in that book. As every construction of the reals we can actually write down is a measure-zero set, it is actually impossible in general to prove the equality of two real numbers. ZFC uses constructibility, Spivak uses Cauchy sequences, etc.
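The equality point can be made concrete with computable reals in the Cauchy-sequence style: represent a real as a program producing rational approximations. Inequality is semi-decidable (approximations eventually separate), but no finite amount of agreement certifies equality. A sketch, with my own illustrative function names and error bounds:

```python
# A computable real as a function n -> rational within 2**-n of the value
# (a Cauchy-sequence representation). Comparing two such reals can certify
# "different" in finite time, but can never certify "equal".
from fractions import Fraction

def sqrt2(n):
    """Rational approximation of sqrt(2) within 2**-n (Newton iteration)."""
    x = Fraction(3, 2)
    while abs(x * x - 2) > Fraction(1, 2**(2 * n)):
        x = (x + 2 / x) / 2
    return x

def half_of_double(n):
    """The same real written differently: (2*sqrt(2))/2, still within 2**-n."""
    return 2 * sqrt2(n + 2) / 2

def apart(a, b, max_n=20):
    """Semi-decide a != b: True is definitive, False only means 'don't know'."""
    for n in range(max_n):
        # each approximation is within 2**-n, so a gap > 2*2**-n is a proof
        if abs(a(n) - b(n)) > Fraction(2, 2**n):
            return True
    return False   # NOT a proof of equality, just exhausted patience

print(apart(sqrt2, lambda n: Fraction(0)))   # True: 0 separates from sqrt(2)
print(apart(sqrt2, half_of_double))          # False: equality never confirmed
```

Raising `max_n` never changes the second answer: the two programs denote the same real, so their approximations never separate, and the loop can only give up.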

If you look at the paper[1] on the equivalence of PAC learnability and finite VC dimension, it is all there: framing learning as a decision problem, set shattering, etc.

Richardson's theorem, especially with more recent papers, is another lens.
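Roughly, Richardson's theorem says that for expressions built from rationals, pi, a variable x, +, *, sin, exp, and abs, deciding whether an expression is identically zero is undecidable. The practical fallback is numeric sampling, which has exactly the one-sided character above: it can refute "identically zero" with a single witness but can only accumulate evidence for it. An illustrative sketch, not taken from any of the papers mentioned:

```python
# Sampling test for "is this expression identically zero?".
# False is definitive (a witness was found); True is only a guess.
import math, random

def probably_zero(f, trials=1000, tol=1e-9):
    for _ in range(trials):
        x = random.uniform(-100.0, 100.0)
        if abs(f(x)) > tol:
            return False        # one counterexample refutes it
    return True                 # no witness found; still not a proof

identity = lambda x: math.sin(x)**2 + math.cos(x)**2 - 1   # identically 0
not_identity = lambda x: math.sin(x) - x                   # 0 only at x = 0

print(probably_zero(identity))      # True (only ever evidence, never proof)
print(probably_zero(not_identity))  # False (refuted by the first big sample)
```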

With the idea that all models are wrong but some are useful, curricula tend to leave these concepts to postgraduate classes, often hiding or ignoring them; in descriptive complexity theory they should have been front and center, IMHO.

Assuming something is computable or learnable is important for finding pathological cases that are useful; just don't confuse the map with the territory.

We have enough examples to know the neuronic model is wrong, but the proposed models we have found aren't useful yet, and what this post provides suggests that may always hold.

Physics and other fields make similar assumptions, like the assumption that Laplacian determinism is true, despite counterexamples.

Gödel, Rice, Turing, etc. may be proven wrong some day, but right now Halting ~= open frame problem ~= symbol grounding ~= system identification in the general case is the safe bet.
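The argument underneath that whole chain is the same diagonal trick. A sketch in code, where `halts` is explicitly the hypothetical, impossible decider (nothing here is implementable; that is the point):

```python
# Diagonalization sketch: assume a total decider halts(prog, arg) existed.
def halts(prog, arg):
    """Hypothetical total decider: True iff prog(arg) halts. Cannot exist."""
    raise NotImplementedError("no such total decider exists")

def diag(prog):
    # Loops forever exactly when `halts` claims prog(prog) halts.
    if halts(prog, prog):
        while True:
            pass
    return "halted"

# diag(diag) halts  iff  halts(diag, diag) is False  iff  diag(diag) does
# not halt: a contradiction, so `halts` can never be filled in.
```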

But that doesn't help get work done, or possibly find new math that changes that.

[1] https://dl.acm.org/doi/10.1145/76359.76371

replies(1): >>43767809 #
ysofunny No.43767809
the decimal point is a lie, big whoop

p-adic numbers will save us. the alternative completion of the rationals we did not know we needed

the only problem is making sense of p-adics in terms of counting seems rather difficult
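For what "p-adics in terms of counting" looks like concretely: a p-adic integer is an infinite base-p expansion extending to the left, arithmetic is ordinary base-p arithmetic with carries, and working mod p**k pins down the first k digits. A small sketch (standard facts, illustrative code):

```python
# First k base-p digits (least significant first) of an integer viewed in Z_p.
def padic_digits(n, p, k):
    n %= p**k                   # a p-adic integer is known to k digits mod p**k
    digits = []
    for _ in range(k):
        digits.append(n % p)
        n //= p
    return digits

# In the 10-adics, -1 = ...9999: adding 1 carries forever, leaving 0.
print(padic_digits(-1, 10, 6))          # [9, 9, 9, 9, 9, 9]

# In the 5-adics, 1/3 is an integer: the modular inverse of 3.
third = pow(3, -1, 5**6)
print(padic_digits(third, 5, 6))        # digits of ...1313132 in base 5
print((3 * third) % 5**6)               # 1
```

That last line is the "counting" strangeness: a number whose digits run off to the left, tripled, collapses to 1 by carries alone.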