
161 points by belleville | 1 comment
erikerikson No.43678060
We have gradient-free algorithms: Hebbian learning. Since 1949?
sva_ No.43680305
That's more a theory/principle than an algorithm by itself.
erikerikson No.43681931
It is an update rule:

W_ij ← f(W_ij, x_i, x_j)

The weight of the connection between nodes i and j is modified by a function of the current weight and the activations (or inputs) of nodes i and j.
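As a concrete sketch of that update rule, here is the classic additive Hebb rule in NumPy; the function name and the learning rate `eta` are illustrative choices, not anything specified in the thread.

```python
import numpy as np

def hebbian_update(W, x, eta=0.1):
    """One Hebbian step over the whole weight matrix.

    Implements delta W_ij = eta * x_i * x_j: units that fire
    together have their connection strengthened ("fire together,
    wire together"). No gradient of any loss is involved.
    """
    return W + eta * np.outer(x, x)
```

Note that the update depends only on local activations, which is what makes the rule gradient-free.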

There are many variants of back propagation too.

Regardless, yes, it would be used within a network model such as a Hopfield network.
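To show how the rule plugs into such a model, here is a minimal Hopfield-style sketch: weights are set by Hebbian outer products of the stored ±1 patterns, and recall is synchronous sign updates. Function names and the step count are assumptions for illustration.

```python
import numpy as np

def store(patterns):
    """Hebbian storage: W = (1/P) * sum_p outer(p, p), zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, x, steps=10):
    """Iterate synchronous sign updates until (hopefully) a fixed point."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1  # break ties deterministically
    return x
```

With a single stored pattern, a probe with one flipped bit is pulled back to the stored pattern, which is the associative-memory behavior the thread alludes to.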