
235 points | tosh | 1 comment
xanderlewis No.40214349
> Stripped of anything else, neural networks are compositions of differentiable primitives

I’m a sucker for statements like this. It almost feels philosophical, and makes the whole subject so much more comprehensible in only a single sentence.

I think François Chollet says something similar in his book on deep learning: one shouldn't fall into the trap of anthropomorphising and mystifying models based on the 'neural' name. Deep learning is simply the application of sequences of operations that are nonlinear (and hence capable of encoding arbitrary complexity) but nonetheless differentiable, and so efficiently optimisable.
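To make "compositions of differentiable primitives" concrete, here is a minimal NumPy sketch (my own illustration, not from Chollet's book): a two-layer network written explicitly as a chain of differentiable steps, with gradients obtained by applying the chain rule primitive by primitive.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # batch of 4 inputs, 3 features each
W1 = rng.normal(size=(3, 5))         # weights of the first linear map
W2 = rng.normal(size=(5, 1))         # weights of the second linear map

# Forward pass: each line is one differentiable primitive.
z1 = x @ W1                          # linear map
h = np.tanh(z1)                      # nonlinearity (still differentiable)
y = h @ W2                           # linear map
loss = np.mean(y ** 2)               # scalar objective

# Backward pass: chain rule, one primitive at a time.
dy = 2 * y / y.size                  # d loss / d y
dW2 = h.T @ dy                       # gradient for the second linear map
dh = dy @ W2.T                       # propagate through the linear map
dz1 = dh * (1 - np.tanh(z1) ** 2)    # derivative of tanh
dW1 = x.T @ dz1                      # gradient for the first linear map
```

This is exactly the structure the quote describes: because every step is differentiable, the gradient of the whole composition exists and is cheap to compute, which is what makes the optimisation efficient.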

phkahler No.40215628
>> one shouldn’t fall into the trap of anthropomorphising and mysticising models based on the ‘neural’ name

And yet, artificial neural networks ARE an approximation of how biological neurons work. It is worth noting that they came out of neurobiology and not some math department; at least the forward direction did — I'm not sure who came up with the training algorithms (probably the math folks). Should they be considered mystical? No. I would also posit that biological neurons are more efficient, and probably have better learning algorithms, than artificial ones today.

I'm confused as to why some people seem to shun the biological equivalence of these things. In a recent thread here I learned that physical synaptic weights (in our brains) are at least partly stored in DNA or its methylation. If that isn't fascinating, I'm not sure what is. Or is it more along the lines of: intelligence can be reduced to a large number of simple things, and biology has merely given us one interesting physical implementation?

srean No.40221293
> And yet, artificial neural networks ARE an approximation of how biological neurons work

For any non-vacuous definition of 'approximation' this is not true at all. It is well understood that (i) back-propagation is biologically infeasible in the brain, and (ii) "the output 'voltage' is a transformed weighted average of the input 'voltages'" is not how neurons operate; (ii) is in the 'not even wrong' category.

Neurons operate in terms of spikes: the frequency of spiking and its quiescence. If you are interested, any undergraduate text in neurobiology will help correct the wrong notions.
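The contrast can be seen in a toy leaky integrate-and-fire model — a standard textbook abstraction, sketched here with illustrative (not biologically calibrated) parameters. Unlike an artificial unit, which directly emits a transformed weighted sum, this neuron's output is a spike train whose *rate* varies with the input current:

```python
def lif_spike_count(current, steps=1000, dt=1e-3,
                    tau=0.02, threshold=1.0, v_reset=0.0):
    """Count spikes of a leaky integrate-and-fire neuron over `steps`
    time steps of size `dt`, driven by a constant input `current`."""
    v, spikes = v_reset, 0
    for _ in range(steps):
        # Leaky integration: membrane potential decays toward zero
        # with time constant tau while being driven by the input.
        v += dt * (current - v) / tau
        if v >= threshold:       # fire and reset
            spikes += 1
            v = v_reset
    return spikes
```

Below threshold the neuron stays silent; above it, the spike rate grows with the input — information is carried in spike timing and frequency, not in a continuous output value.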