
346 points | swatson741 | 1 comment
nirinor No.45790653
It's a nitpick, but backpropagation is getting a bad rap here. These examples are about gradients and gradient-descent variants being a leaky abstraction for optimization [1].

Backpropagation is a specific algorithm for computing gradients of composite functions, but even the failures that do come from composition (multiple sequential sigmoids cause exponential gradient decay) are not backpropagation-specific: that's just how the gradients of that function behave, whatever algorithm you use to compute them. The remedy of having people calculate their own backward pass is useful because people are _calculating their own derivatives_ for the functions, and get a chance to notice the exponents creeping in. Ask me how I know ;)
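(To make the decay concrete, here is a small sketch of my own, not from the original comment: hand-computing the chain rule through n stacked sigmoids. Each layer contributes a local derivative s*(1-s), which is at most 0.25, so the product shrinks exponentially with depth regardless of how the gradient is computed.)

```python
# Sketch (mine, not from the comment): derivative of sigmoid composed
# n times, computed by hand via the chain rule. Each factor s*(1-s)
# is at most 0.25, so the gradient decays exponentially in depth.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grad_through_sigmoids(x, n):
    """d/dx of sigmoid applied n times, accumulated via the chain rule."""
    grad = 1.0
    for _ in range(n):
        s = sigmoid(x)
        grad *= s * (1.0 - s)  # local derivative, at most 0.25
        x = s                  # forward pass feeds the next layer
    return grad

for n in (1, 5, 10, 20):
    print(n, grad_through_sigmoids(0.0, n))
```

Watching those exponents creep in by hand is exactly the experience the calculate-your-own-backward-pass exercise gives you.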

[1] Gradients being exactly zero would not be a problem for a global optimization algorithm (which we don't use because they are impractical in high dimensions). Gradients getting very small might be dealt with using tools like line search (if they are small in all directions) or approximate Newton methods (if small in some directions but not others). Not saying those are better solutions in this context, just that optimization (plus modeling) is the actually hard part, not the way gradients are calculated.
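(A toy illustration of the line-search point; the function and parameters here are my own made-up example, not from the comment. On f(x) = x**4 the gradient vanishes near the minimum, so fixed-step gradient descent crawls, while a backtracking (Armijo) search along the normalized descent direction picks its step length independently of the gradient's magnitude.)

```python
# Sketch (my toy example): f(x) = x**4 has a vanishing gradient near
# its minimum at 0. Fixed-step gradient descent crawls there; an Armijo
# backtracking line search along the unit descent direction chooses its
# own step length.
def f(x):
    return x ** 4

def grad(x):
    return 4 * x ** 3

def armijo_step(x, t0=1.0, beta=0.5, c=1e-4):
    """One line-search step: backtrack until sufficient decrease holds."""
    g = grad(x)
    if g == 0.0:
        return x
    d = -1.0 if g > 0 else 1.0           # unit descent direction
    t = t0
    while f(x + t * d) > f(x) + c * t * g * d:
        t *= beta                        # shrink until the Armijo test passes
    return x + t * d

x_fixed, x_ls = 1.0, 1.0
for _ in range(50):
    x_fixed -= 0.1 * grad(x_fixed)       # fixed learning rate
    x_ls = armijo_step(x_ls)             # backtracking line search
print(x_fixed, x_ls)                     # line search lands far closer to 0
```

Not a recommendation for deep nets (line search per step is expensive there), just a picture of why "small gradient" is an optimization problem, not a gradient-computation problem.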

xpe No.45791131
Yes. No need to be apologetic or timid about it — it’s not a nit to push back against a flawed conceptual framing.

I respect Karpathy’s contributions to the field, but often I find his writing and speaking to be more than imprecise — it is sloppy in the sense that it overreaches and butchers key distinctions. This may sound harsh, but at his level, one is held to a higher standard.

embedding-shape No.45791270
> often I find his writing and speaking to be more than imprecise

I think that's more because he's trying to write for an audience that isn't already hardcore deep into ML, so he simplifies a lot, sometimes to the detriment of accuracy.

At this point I see him more as a "ML educator" than "ML practitioner" or "ML researcher", and as far as I know, he's moving in that direction on purpose, and I have no qualms with it overall, he seems good at educating.

But I think shifting your mindset about the purpose of his writing may help explain why it sometimes feels imprecise.