
235 points | tosh | 1 comment
xanderlewis:
> Stripped of anything else, neural networks are compositions of differentiable primitives

I’m a sucker for statements like this. It almost feels philosophical, and makes the whole subject so much more comprehensible in only a single sentence.

I think François Chollet says something similar in his book on deep learning: one shouldn’t fall into the trap of anthropomorphising and mysticising models based on the ‘neural’ name; deep learning is simply the application of sequences of operations that are nonlinear (and hence capable of encoding arbitrary complexity) but nonetheless differentiable and so efficiently optimisable.
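A toy sketch (mine, not from Chollet) of that view: a two-layer network is nothing but tanh composed with affine maps, and because every primitive is differentiable, the chain rule gives exact gradients for plain gradient descent. All names and shapes here are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: f(x) = W2 @ tanh(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(4, 1)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))

x = np.array([[0.5]])   # one input
y = np.array([[1.0]])   # desired output

def forward(W1, b1, W2, b2, x):
    h = np.tanh(W1 @ x + b1)   # nonlinear, but differentiable
    return W2 @ h + b2, h

for _ in range(100):
    out, h = forward(W1, b1, W2, b2, x)
    err = out - y                # d(loss)/d(out) for loss = 0.5 * err**2
    # Chain rule, layer by layer (backpropagation by hand):
    dW2, db2 = err @ h.T, err
    dh = W2.T @ err
    dz = dh * (1 - h**2)         # tanh'(z) = 1 - tanh(z)**2
    dW1, db1 = dz @ x.T, dz
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g             # plain gradient descent step

final, _ = forward(W1, b1, W2, b2, x)
```

Nothing ‘neural’ is needed to state any of this: it is composition of differentiable functions plus the chain rule.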

phkahler:
>> one shouldn’t fall into the trap of anthropomorphising and mysticising models based on the ‘neural’ name

And yet, artificial neural networks ARE an approximation of how biological neurons work. It is worth noting that they came out of neurobiology and not some math department, at least in the forward direction; I'm not sure who came up with the training algorithms (probably the math folks). Should they be considered mystical? No. But I would posit that biological neurons are more efficient, and probably have better learning algorithms, than today's artificial ones.

I'm confused as to why some people shun the biological equivalence of these things. In a recent thread here I learned that synaptic weights (in our brains) are physically stored, at least in part, in DNA or its methylation. If that isn't fascinating, I'm not sure what is. Or is the point that intelligence can be reduced to a large number of simple things, and biology has merely given us one interesting physical implementation?

xanderlewis:
As the commenter below mentions, the biological version of a neuron (i.e. a neuron) is much more complicated than the neural network version. The neural network version is essentially just a weighted sum, with an extra layer of shaping applied afterwards to make it nonlinear. As far as I know, we still don’t understand all of the complexity about how biological neurons work. Even skimming the Wikipedia page for ‘neuron’ will give you some idea.
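To make the contrast concrete (a sketch of the textbook artificial neuron, not anything from neuroscience): the entire unit is a weighted sum followed by a squashing nonlinearity, two lines of code.

```python
import math

def neuron(inputs, weights, bias):
    """A classic artificial neuron: weighted sum of inputs, then a sigmoid squash."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid maps the reals into (0, 1)
```

For example, `neuron([1.0, 2.0], [0.5, -0.25], 0.0)` gives a weighted sum of exactly zero, so the output is 0.5. That really is all there is; everything a real neuron does beyond this is not captured.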

The original idea of approximating something like a neuron with a weighted sum (a fairly obvious idea, given the early discovery that neurons become ‘activated’, and do so in proportion to the activation of the neurons they are connected to) did come from thinking about biological brains, but the mathematical building blocks are incredibly simple and are hundreds of years old, if not thousands.

naasking:
> the biological version of a neuron (i.e. a neuron) is much more complicated than the neural network version

This is a difference of degree, not of kind, because neural networks are Turing complete. Whatever additional complexity the neuron has can itself be modelled as a neural network.

Edit: meaning, that if the greater complexity of a biological neuron is relevant to its information processing component, then that just increases the number of artificial neural network neurons needed to describe it, it does not need any computation of a different kind.

xanderlewis:
PowerPoint is Turing complete. Does that mean PowerPoint should be regarded as being biological or at least neuroscience-inspired?
naasking:
No, but neural networks literally were inspired by biology so I'm not sure what your point is.
xanderlewis:
My point is that you seem to think neurons in the sense of artificial neural networks and neurons in the human brain are equivalent because:

(1) Neural networks are Turing complete, and hence can do anything brains can. [Debatable anyway; we don’t know this to be the case, since brains might be doing more than computation. Ask a philosopher or a cognitive scientist. Or Roger Penrose.]

(2) Neural networks were very loosely inspired by the idea that the human brain is made up of interconnected nodes that ‘activate’ in proportion to how other related nodes do.

I don’t think that’s nearly enough to say that they’re equivalent. For (1), we don’t yet know (and we’re not even close), and anyway: if you consider all Turing complete systems to be equivalent to the point of it being a waste of time to talk about their differences then you can say goodbye to quite a lot of work in theoretical computer science. For (2): so what? Lots of things are inspired by other things. It doesn’t make them in any sense equivalent, especially if the analogy is as weak as it is in this case. No neuroscientist thinks that a weighted sum is an adequate (or even remotely accurate) model of a real biological neuron. They operate on completely different principles, as we now know much better than when such things were first dreamed up.

naasking:
The brain certainly could be doing super-Turing computation, but that would overturn quite a bit of physics, seeing as not even quantum computers are more powerful than Turing machines (they're just faster on some problems). Extraordinary claims and all that.

As for equivalency, that depends on how that's defined. Real neurons would not feature any more computational power than Turing machines or artificial neural networks, but I never said it would be a waste of time to talk about their differences. I merely pointed out that the artificial neural network model is still sufficient, even if real neurons have more complexity.

> No neuroscientist thinks that a weighted sum is an adequate (or even remotely accurate) model of a real biological neuron

Fortunately that's not what I said. If the neuron indeed has more relevant complexity, then it wouldn't be one weighted sum = one biological neuron, but one biological neuron = a network of weighted sums, since such a network can model any function.
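A minimal sketch of that ‘a network of weighted sums can model any function’ claim (my illustration, with an arbitrary target): differences of steep sigmoids form localised bumps, and a weighted sum of such bumps tracks a 1-D target as closely as you like given enough units.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target to approximate on [0, 1] (arbitrary choice for the demo).
xs = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * xs)

# One hidden "unit" per bin: a pair of steep sigmoids makes a bump over
# [a, a + width]; weight each bump by the target's value at the bin centre.
n_units, k = 50, 500.0          # more units / steeper slopes => closer fit
width = 1.0 / n_units
approx = np.zeros_like(xs)
for i in range(n_units):
    a = i * width
    bump = sigmoid(k * (xs - a)) - sigmoid(k * (xs - a - width))
    approx += np.sin(2 * np.pi * (a + width / 2)) * bump

max_err = np.max(np.abs(approx - target))
```

With 50 units the pointwise error already stays below about 0.1; doubling the unit count roughly halves it. This is the constructive intuition behind the universal approximation theorem, not a statement that real neurons work this way.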

xanderlewis:
The original comment you were defending suggested that artificial neurons are somehow very close to biological ones, since supposedly that’s where their inspiration came from.

If you’re interested in pure computational ‘power’, then if the brain is nothing more than a Turing machine (which, as you agree, it might not be), fine. You can call them ‘equivalent’. It’s just not very meaningful.

What’s interesting about neural nets has nothing to do with what they can compute; indeed they can compute anything any other Turing machine can, and nothing more. What’s interesting is how they do it, since they can ‘learn’ and hence allow us to produce solutions to hard problems without any explicit programming or traditional analysis of the problem.
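That ‘learning without explicit programming’ point can be shown in miniature (my sketch, with logical AND as an arbitrary learnable target): the rule is never written down, only example input/output pairs, yet gradient descent on a single sigmoid unit recovers it.

```python
import math

# Training data for logical AND; the rule itself is never coded,
# only example input/output pairs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 1.0
for _ in range(2000):
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # sigmoid output
        g = p - y            # gradient of cross-entropy loss w.r.t. the pre-activation
        w1 -= lr * g * x1
        w2 -= lr * g * x2
        b -= lr * g

# The learned weights now implement AND, though we never programmed the rule.
preds = [round(1.0 / (1.0 + math.exp(-(w1 * a + w2 * c + b)))) for (a, c), _ in data]
```

The interesting part is exactly what the comment says: the solution comes from optimisation over examples, not from any analysis of the problem.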

> that would overturn quite a bit of physics

Our physics is currently woefully incomplete, so… yes. That would be welcome.