
251 points by slyall | 1 comment
2sk21 No.42058282
I'm surprised that the article doesn't mention that one of the key factors that enabled deep learning was the use of ReLU as the activation function in the early 2010s. ReLU behaves a lot better than the logistic sigmoid that we used until then.
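
A minimal numpy sketch of that difference (standard definitions, not from the article): the sigmoid's gradient never exceeds 0.25 and vanishes for large |x|, while ReLU's gradient is exactly 1 for any positive input, so it doesn't shrink as it is multiplied back through many layers.

  import numpy as np

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def sigmoid_grad(x):
      s = sigmoid(x)
      return s * (1.0 - s)          # never exceeds 0.25; ~0 for large |x|

  def relu_grad(x):
      return (x > 0).astype(float)  # exactly 1 for every positive input

  x = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])
  print(np.round(sigmoid_grad(x), 3))  # [0.002 0.105 0.25  0.105 0.002]
  print(relu_grad(x))                  # [0. 0. 0. 1. 1.]
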
sanxiyn No.42059243
Geoffrey Hinton (now a Nobel Prize winner!) gave a summary himself; I think it is the single best one on this topic.

  Our labeled datasets were thousands of times too small.
  Our computers were millions of times too slow.
  We initialized the weights in a stupid way.
  We used the wrong type of non-linearity.
imjonse No.42059572
That is a pithier formulation of the widely accepted summary: "more data + more compute + algorithmic improvements."
sanxiyn No.42059591
No, it isn't. It emphasizes the importance of Glorot initialization and ReLU.
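
As a minimal sketch of those two pieces (Glorot/Xavier uniform initialization plus a ReLU layer; the layer sizes below are just illustrative):

  import numpy as np

  def glorot_uniform(fan_in, fan_out, seed=0):
      # Glorot & Bengio (2010): draw weights uniformly from [-limit, limit]
      # with limit = sqrt(6 / (fan_in + fan_out)), which keeps activation and
      # gradient variance roughly constant across layers.
      limit = np.sqrt(6.0 / (fan_in + fan_out))
      rng = np.random.default_rng(seed)
      return rng.uniform(-limit, limit, size=(fan_in, fan_out))

  def relu(x):
      return np.maximum(0.0, x)

  # Illustrative shapes: a 784-dimensional input feeding a 256-unit hidden layer.
  W = glorot_uniform(784, 256)
  x = np.random.default_rng(1).standard_normal((32, 784))
  h = relu(x @ W)

(In practice ReLU layers are now often paired with He initialization, which scales by fan_in alone, but Glorot-style variance scaling is the fix being pointed at here.)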