268 points by prashp | 1 comment | source
dawnofdusk ◴[] No.39216343[source]
This is one of the cool things about various neural network architectures that I've found in my own work: you can make a lot of dumb mistakes in coding certain aspects, but because the model has so many degrees of freedom, it can actually "learn away" your mistakes.
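A minimal sketch of the kind of mistake I mean (a hypothetical toy example, not code from my actual work): PyTorch's nn.CrossEntropyLoss already applies log-softmax internally, so putting a Softmax layer at the end of the model is a bug, yet the network still trains to decent accuracy because it learns to compensate.

    # hypothetical toy example: redundant softmax before CrossEntropyLoss
    # (which already applies log-softmax internally), yet training still works
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(2000, 20)              # synthetic 3-class data
    y = X[:, :3].argmax(dim=1)

    buggy_model = nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(),
        nn.Linear(64, 3),
        nn.Softmax(dim=1),                 # <- the bug
    )

    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(buggy_model.parameters(), lr=1e-2)

    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(buggy_model(X), y)
        loss.backward()
        opt.step()

    acc = (buggy_model(X).argmax(dim=1) == y).float().mean()
    print(f"loss {loss.item():.3f}, train accuracy {acc:.2%}")   # accuracy usually still high

The loss can never reach zero with the extra softmax in place, but the accuracy alone gives almost no hint that anything is wrong.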
replies(1): >>39218043 #
CamperBob2 ◴[] No.39218043[source]
It's also one of the scariest things about NNs. Traditionally, if you had a bug that was causing serious performance or quality issues, it was a safe bet that you'd eventually discover it and fix it. It would fail one test or another, crash the program, or otherwise come up short against the expectations you'd have for a working implementation. Now it's almost impossible to know if what you've implemented is really performing at its best.

IMO the ability of a NN to compensate for bugs and unfounded assumptions in the model isn't a Good Thing in the slightest. Building latent-space diagnostics that can determine whether a network is wasting time working around bugs sounds like a worthwhile research topic in itself (and probably already is).
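A rough sketch of what one such diagnostic might look like (my own hypothetical example, not an established tool): run the same batch through a suspect implementation and a trusted reference, then compare their layer activations with linear CKA; a sharp dip at some depth suggests the suspect network is spending capacity on something the reference never had to do.

    # hypothetical sketch: linear CKA similarity between activation matrices
    # collected from a reference implementation and a suspect one, same input batch
    import torch

    def linear_cka(X: torch.Tensor, Y: torch.Tensor) -> float:
        # X, Y: (n_samples, n_features) activations from corresponding layers
        X = X - X.mean(dim=0, keepdim=True)
        Y = Y - Y.mean(dim=0, keepdim=True)
        hsic = (X.T @ Y).norm() ** 2
        return (hsic / ((X.T @ X).norm() * (Y.T @ Y).norm())).item()

    # sims = [linear_cka(ref, sus) for ref, sus in zip(ref_acts, suspect_acts)]
    # a sharp drop at one layer hints that capacity is going somewhere unexpected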

replies(3): >>39218802 #>>39222080 #>>39227182 #
1. dawnofdusk ◴[] No.39218802[source]
It is a good thing for deep NNs to be expressive enough to do this, because it is this level of expressivity that lets them find answers to otherwise ill-posed problems. If they were not able to do this, there would be no point in using them.

The only thing that is scary is the hype, because it will make people sloppily use deep learning architectures for problems that do not need that level of expressive power. And because deep learning is challenging and not theoretically well understood, there will be little to no attempt made to ensure safe operation or quality assurance of the implemented solution.