
92 points jxmorris12 | 2 comments
jxmorris12 No.43764187
I'm currently a machine learning grad student taking a meta-complexity class, and I came across this blog post. I found the whole thing very interesting. In particular, the idea that some things are uncomputable seems fundamentally unaddressed in ML.

We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data. Clearly there's more to the story here.
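The "learnable with neural networks, backpropagation, and enough data" assumption can be illustrated with a toy sketch (hypothetical code, not from the post): a one-hidden-layer tanh network trained by hand-written full-batch gradient descent to approximate a computable target function, here sin(x).

```python
# Toy sketch of the "learnability" assumption: fit a computable target
# function (sin) with a one-hidden-layer network and backpropagation.
# All names and hyperparameters here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# One hidden layer of 32 tanh units, small random init.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    return H, H @ W2 + b2             # network output

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)     # MSE before training

for _ in range(2000):
    H, pred = forward(X)
    grad_pred = 2 * (pred - y) / len(X)       # dLoss/dpred
    gW2 = H.T @ grad_pred; gb2 = grad_pred.sum(0)
    gH = grad_pred @ W2.T * (1 - H ** 2)      # backprop through tanh
    gW1 = X.T @ gH;        gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)       # MSE after training
```

The loss drops substantially, which is exactly the behavior assumption (b) generalizes to "everything": the sketch works because sin is computable and the data fully determine it, and nothing in it says what happens when the target function is not computable at all.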

replies(2): >>43764826 #>>43765944 #
1. dwohnitmok No.43765944
> We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data.

I don't think the assumption is that strong. The assumption is rather that human learning is computable, and that a machine equivalent of it should therefore be computable too.

replies(1): >>43768965 #
2. jaza No.43768965
> The assumption is rather that human learning is computable

I don't think the assumption is even that strong! The skills that really set us humans above mere machines - e.g. causal inference, creativity, critical analysis, self-awareness - aren't, I think, assumed to be computable (and IMHO there's no evidence as yet to suggest otherwise). The only skill that AI currently possesses is the ability to apply an ever-more-elaborate statistical aggregate function to data. The assumption is just that anything that can be encoded as data can be operated on to produce an ever-more-elaborate aggregate result.