306 points by slyall | 1 comment
2sk21 ◴[] No.42058282[source]
I'm surprised that the article doesn't mention that one of the key factors that enabled deep learning was the use of ReLU as the activation function in the early 2010s. ReLU behaves much better than the logistic sigmoid that we used until then.
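A quick way to see why (my own illustration, not from the article): the logistic sigmoid saturates, so its derivative is at most 0.25 and close to zero for large |x|, and multiplying such factors across many layers shrinks gradients geometrically. ReLU's derivative is exactly 1 for any positive input, so gradients pass through active units unchanged. A minimal NumPy sketch of the two gradients (the sample points are arbitrary):

  import numpy as np

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def sigmoid_grad(x):
      s = sigmoid(x)
      return s * (1.0 - s)          # capped at 0.25, nearly 0 for large |x|

  def relu(x):
      return np.maximum(0.0, x)

  def relu_grad(x):
      return (x > 0).astype(float)  # exactly 1 for any positive input

  x = np.array([-5.0, -1.0, 0.5, 5.0])
  print(sigmoid_grad(x))            # ~[0.0066 0.1966 0.2350 0.0066]
  print(relu_grad(x))               # [0. 0. 1. 1.]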
replies(2): >>42059243 #>>42061534 #
sanxiyn ◴[] No.42059243[source]
Geoffrey Hinton (now a Nobel Prize winner!) himself did a summary. I think it is the single best summary on this topic.

  Our labeled datasets were thousands of times too small.
  Our computers were millions of times too slow.
  We initialized the weights in a stupid way.
  We used the wrong type of non-linearity.
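To make the last two points concrete (a rough sketch, not Hinton's code; the depth, width, and scale constants below are made up for illustration): the "stupid" recipe was typically small fixed-scale random weights chosen without regard to layer width, whereas later fan-in-scaled schemes such as Glorot/Xavier (2010) and He (2015) keep activation magnitudes roughly constant with depth, which pairs well with ReLU:

  import numpy as np

  rng = np.random.default_rng(0)

  def final_activation_std(init, depth=20, width=512):
      # Push random data through `depth` ReLU layers and report the
      # standard deviation of the activations at the last layer.
      x = rng.standard_normal((1000, width))
      for _ in range(depth):
          W = init(width, width)
          x = np.maximum(0.0, x @ W)
      return x.std()

  # "Stupid" init: small fixed-scale Gaussian, independent of layer width.
  naive = lambda fan_in, fan_out: 0.01 * rng.standard_normal((fan_in, fan_out))

  # He-style init: variance 2 / fan_in, tuned for ReLU layers.
  he = lambda fan_in, fan_out: np.sqrt(2.0 / fan_in) * rng.standard_normal((fan_in, fan_out))

  print(final_activation_std(naive))  # collapses toward 0 -> vanishing signal
  print(final_activation_std(he))     # stays on the order of 1

With the fixed 0.01 scale the activation standard deviation collapses after a handful of layers (so gradients vanish on the way back), while the fan-in-scaled version stays on the order of 1 no matter how deep the stack gets.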
replies(3): >>42059572 #>>42076459 #>>42119083 #
helltone ◴[] No.42076459[source]
I'm curious and it's not obvious to me: what changed in terms of weight initialisation?