161 points belleville | 11 comments
itsthecourier ◴[] No.43677688[source]
"Whenever these kind of papers come out I skim it looking for where they actually do backprop.

Check the pseudo code of their algorithms.

"Update using gradient based optimizations""

replies(4): >>43677717 #>>43677878 #>>43684074 #>>43725019 #
f_devd ◴[] No.43677878[source]
I mean, the only claim is no propagation; you always need a gradient of sorts to update parameters, unless you just stumble upon the desired parameters. Even genetic algorithms effectively have gradients, obfuscated through random projections.
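For a concrete illustration of that last point, here is a minimal evolution-strategies-style sketch (toy objective; all names and sizes illustrative). Averaging random perturbations weighted by their fitness is a Monte Carlo estimate of the gradient of the smoothed objective, so the "hidden" gradient is right there in the update:

    import numpy as np

    # ES-style update: perturb parameters randomly and weight each
    # perturbation by the fitness it achieved. The average is a Monte
    # Carlo estimate of the gradient of the Gaussian-smoothed objective.
    rng = np.random.default_rng(0)

    def fitness(theta):                  # toy objective (illustrative)
        return -np.sum((theta - 3.0) ** 2)

    theta = np.zeros(5)
    sigma, lr, pop = 0.1, 0.02, 200

    for _ in range(300):
        eps = rng.normal(size=(pop, theta.size))         # random projections
        f = np.array([fitness(theta + sigma * e) for e in eps])
        grad_est = (f - f.mean()) @ eps / (pop * sigma)  # gradient estimate
        theta += lr * grad_est

    print(theta.round(2))                # approaches the optimum at 3.0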
replies(3): >>43678034 #>>43679597 #>>43679675 #
1. erikerikson ◴[] No.43678034[source]
No, you don't. See Hebbian learning (neurons that fire together wire together). Bonus: it is one of the biologically plausible options.

Maybe you have a way of seeing it differently so that this looks like a gradient? The word "gradient" keys my brain into a desired outcome expressed as an expectation function.
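For concreteness, a minimal numpy sketch of that rule for a single linear neuron (toy sizes, illustrative names). The weight change uses only locally available pre- and post-synaptic activity:

    import numpy as np

    # Plain Hebbian update: strengthen a weight whenever its input and
    # the neuron's output are active together. No target, no loss, no
    # error signal -- only locally available activity.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))       # stream of inputs (toy data)
    w = rng.normal(scale=0.1, size=4)
    eta = 0.01

    for x in X:
        y = w @ x                        # postsynaptic activity
        w += eta * y * x                 # "fire together, wire together"

Nothing in the loop compares y to a desired value, which is the sense in which no expectation function appears.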

replies(4): >>43678091 #>>43679021 #>>43680033 #>>43683591 #
2. red75prime ◴[] No.43678091[source]
> See Hebbian learning

The one that is not used, because it's inherently unstable?

Learning using locally accessible information is an interesting approach, but it needs to be more complex than "fire together, wire together". And then you might have propagation of information that allows gradients to be approximated locally.
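The instability is easy to exhibit: with correlated inputs, the plain Hebbian weight norm grows without bound, since every update adds energy along the dominant input direction and nothing ever shrinks the weights. A sketch with toy data:

    import numpy as np

    # Plain Hebb on correlated inputs: the weight norm diverges, because
    # the update keeps reinforcing the dominant input direction and no
    # term ever decays the weights.
    rng = np.random.default_rng(0)
    C = np.array([[2.0, 1.2], [1.2, 1.0]])        # input covariance (toy)
    X = rng.multivariate_normal([0, 0], C, size=2000)
    w = np.array([0.1, 0.1])

    for x in X:
        w += 0.01 * (w @ x) * x                   # plain Hebbian step

    print(np.linalg.norm(w))                      # huge and still growing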

replies(1): >>43678117 #
3. erikerikson ◴[] No.43678117[source]
Is that what they're teaching now? Originally it was not used because it was believed it couldn't learn XOR (it can, just not with perceptrons as they were defined).

Is there anyone in particular whose work focuses on this that you know of?

replies(1): >>43679247 #
4. yobbo ◴[] No.43679021[source]
If there is a weight update, there is a gradient and a loss objective. You might not write them down explicitly.

I can't recall exactly what the Hebbian update is, but something tells me it minimises the "reconstruction loss", and effectively learns the PCA matrix.

replies(2): >>43680272 #>>43682329 #
5. ckcheng ◴[] No.43679247{3}[source]
Oja's rule dates back to 1982?

It's Hebbian and solves the stability problem.

https://en.wikipedia.org/wiki/Oja's_rule
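For reference, Oja's rule adds a decay term proportional to y^2 that keeps the weight vector near the unit sphere, and the weight converges to the first principal component of the inputs. A minimal sketch with toy data (same covariance that diverges under plain Hebb above):

    import numpy as np

    # Oja's rule: Hebbian term plus a decay proportional to y^2, which
    # keeps the weight on the unit sphere. The weight converges to the
    # first principal component of the input distribution.
    rng = np.random.default_rng(0)
    C = np.array([[2.0, 1.2], [1.2, 1.0]])        # input covariance (toy)
    X = rng.multivariate_normal([0, 0], C, size=5000)
    w = rng.normal(scale=0.1, size=2)

    for x in X:
        y = w @ x
        w += 0.01 * y * (x - y * w)               # Oja's rule

    top_pc = np.linalg.eigh(C)[1][:, -1]          # dominant eigenvector of C
    print(np.linalg.norm(w), np.abs(w @ top_pc))  # ~1.0 and ~1.0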

6. HarHarVeryFunny ◴[] No.43680033[source]
Even with Hebbian learning, isn't there a synapse strength? If so, then you at least need a direction (+/-) if not a specific gradient value.
replies(1): >>43682035 #
7. orbifold ◴[] No.43680272[source]
Not every vector field has a potential. So not every weight update can be written as a gradient.
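A concrete two-parameter example: the update field dw = (w2, -w1) has an antisymmetric Jacobian (d(dw1)/dw2 = 1 but d(dw2)/dw1 = -1), and gradient fields always have symmetric Jacobians, so no loss has this field as its gradient. Following it just rotates the weights (a sketch):

    import numpy as np

    # The update dw = (w2, -w1) cannot be the gradient of any loss:
    # following it rotates the weights around the origin forever, and
    # no scalar quantity is being minimised along the way.
    w = np.array([1.0, 0.0])
    for _ in range(10000):
        w += 0.001 * np.array([w[1], -w[0]])      # rotation, not descent

    print(np.linalg.norm(w))                      # stays ~1.0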
replies(1): >>43682930 #
8. erikerikson ◴[] No.43682035[source]
Yes, there is a weight on every connection. At least when I was active in the field, gradients were talked about in reference to the solution space (e.g. gradient descent). The implication is that there is some notion of what is "correct" for some neuron to output, and then we bend it to our will by updating the weight. In Hebbian learning there isn't a notion of correct activation, just a calculation over the local environment.
9. erikerikson ◴[] No.43682329[source]
> loss objective

There is no prediction or desired output, certainly not an explicit one. I was playing with these things in my work, trying to understand how our brains give rise to intelligence rather than to solve some classification or related problem. What I managed to replicate was the learning of XOR by some nodes, and further that multidimensional XORs, up to the number of inputs, could be learned.

Perhaps you can say that something PCA-ish is the implicit objective/result, but I still reject that there is any conceptual notion of what a node "should" output, even if iteratively applying the learning rule leads us there.

10. yobbo ◴[] No.43682930{3}[source]
True.
11. srean ◴[] No.43683591[source]
Nope, that rank-one update is exactly the projected gradient of the reconstruction loss. That's not the way it is usually taught, so Hebbian learning was an unfortunate example.

Gradient descent is only one way of searching for a minimum, so in that sense it is not necessary; for example, one can sometimes solve analytically for the extrema of the loss. As an alternative one could do Monte Carlo search instead of gradient descent, though for a convex loss that would of course be less efficient.
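The Hebbian equivalence is quick to check: with y = w.x and reconstruction loss J(w) = ||x - y*w||^2, the gradient at ||w|| = 1 works out to -2*y*(x - y*w), which is already tangent to the unit sphere (so projecting changes nothing), and -J'/2 = y*(x - y*w) is exactly Oja's rank-one update. A numerical check (illustrative):

    import numpy as np

    # Verify that Oja's rank-one update equals -1/2 the gradient of the
    # reconstruction loss J(w) = ||x - (w.x) w||^2, evaluated at ||w|| = 1.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    w = rng.normal(size=4)
    w /= np.linalg.norm(w)                        # stay on the unit sphere
    y = w @ x

    def J(w):
        return np.sum((x - (w @ x) * w) ** 2)

    eps = 1e-6                                    # finite-difference gradient
    grad = np.array([(J(w + eps * e) - J(w - eps * e)) / (2 * eps)
                     for e in np.eye(4)])

    oja = y * (x - y * w)                         # Oja's rank-one update
    print(np.allclose(oja, -0.5 * grad, atol=1e-5))   # True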